NetBSD-SoC: Display control and acceleration
What is it?
This project is to create a unified system for display control and drawing operations under NetBSD. (It was originally to be a wscons port of Xorg's 'kdrive' X server, but it then transpired that kdrive was effectively dead.)
Almost all modern computers have some form of display hardware; in the past, however, the only applications able to use it have been X and other display managers, which include their own hardware drivers and operate them from userland. Any application that wishes to perform even basic drawing operations must use one of these display managers, or write its own hardware driver.
This has changed over time: Linux's framebuffer console drivers, for example, use framebuffer devices and acceleration features to provide a fast, high-resolution console. More recently, kernel mode setting has been implemented, moving display configuration into the kernel so that userland applications can request resolution changes through a kernel interface.
This project aims to implement a form of kernel mode setting and framebuffer console acceleration, as well as exporting acceleration functionality to userland. The primary goal is to let userland applications configure the display and perform accelerated drawing operations without any device-specific application code. This is likely to be of use to embedded systems, by allowing programmers to run and debug their code on their development machine instead of on the target device.
Rather than create an all-singing, all-dancing display system, the userland portions should be as simple as possible (for example, only bitmap management and bitblt operations), so that they can be plugged into an existing display manager such as Qt/Embedded or DirectFB.
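To illustrate how small that userland surface could be, here is a software-only sketch of a bitblt over a plain pixel buffer. All names here (struct surface, bitblt) are hypothetical illustrations, not this project's actual API; a real backend would hand the copy off to the device-dependent acceleration layer rather than looping in software.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical minimal surface: a 32bpp pixel buffer with a stride. */
struct surface {
	uint32_t *pixels;
	int width, height;	/* in pixels */
	int stride;		/* in pixels, >= width */
};

/*
 * Copy a w*h rectangle from (sx,sy) in src to (dx,dy) in dst,
 * one row at a time. No clipping: the caller must pass in-bounds
 * coordinates.
 */
static void
bitblt(struct surface *dst, int dx, int dy,
    const struct surface *src, int sx, int sy, int w, int h)
{
	for (int y = 0; y < h; y++)
		memcpy(&dst->pixels[(dy + y) * dst->stride + dx],
		    &src->pixels[(sy + y) * src->stride + sx],
		    (size_t)w * sizeof(uint32_t));
}
```

A display manager built on top would only need primitives of roughly this shape, leaving everything else (widgets, fonts, windows) to the existing software stack.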
I've completed a working sample device driver for my own hardware (an Intel 915GM chipset laptop) and implemented mode enumeration/setting ioctls. I also have the beginnings of an acceleration API, which currently provides enough support to run an accelerated framebuffer console.
- April 21, 2009: Community Bonding Period -- Students get to know mentors, read documentation, get up to speed to begin working on their projects.
- May 23, 2009: Students begin coding for their GSoC projects; Google begins issuing initial student payments
- July 6, 2009: Mentors and students can begin submitting mid-term evaluations.
- July 13, 2009: Mid-term evaluation deadline; Google begins issuing mid-term student payments provided passing student survey is on file.
- August 10, 2009: Suggested 'pencils down' date. Take a week to scrub code, write tests, improve documentation, etc.
- August 17, 2009: Firm 'pencils down' date. Mentors, students and organization administrators can begin submitting final evaluations to Google.
- August 24, 2009: Final evaluation deadline; Google begins issuing student and mentoring organization payments provided forms and evaluations are on file.
Deliverables / Must-have components:
- Kernel layer to provide wsdisplay display mode enumeration and configuration (done)
- Pass-through facility to allow framebuffer console raster ops to be translated and take advantage of 2D acceleration (done, sort-of)
- Userland library to interact with wsdisplay and help device configuration (ioctls implemented)
- Userland library for 2D acceleration, and to abstract any device specific operations away from the user
- Sample driver for all of the above on my own hardware (i915 laptop) (done)
- Backend for a piece of window management software (for example nano-x), allowing it to use this project for display.
Optional (would-be-nice) components:
- Backend to allow Qt/Embedded or perhaps even X to use this project
- Sample drivers for different hardware - I have some ancient thin clients, as well as a BeagleBoard which will shortly get an LCD attached
Technical Details - Design
The first area of interest is the mode setting functionality. Linux currently supports this via a series of ioctls to drm, allowing the user to enumerate and set the CRTC settings. However, this information does not include any additional properties of the display; any extra information must be passed through (in my opinion, unwieldy) GETPROPERTY/SETPROPERTY ioctls. In addition, all the displays attached to the system (or at least to a particular card) are controlled through that one drm device, rather than distinguishing one display from another.
It would make more sense to use NetBSD's existing proplib property API to enumerate display modes, with mode setting implemented as simply selecting one of the enumerated modes. This has the added benefit that, trivially, the display can only be configured in a mode that it supports. This functionality can then be delivered through NetBSD's existing wsdisplay interface - which raises the possibility of trivial multi-head configuration through wscons!
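With the kernel as the sole source of truth for valid modes, the userland side reduces to picking one entry from the enumerated list and handing its index back. The following sketch shows only that selection step; the struct layout and the preference for the highest refresh rate are my assumptions for illustration, not the project's actual API (in practice the fields would be unpacked from proplib dictionaries).

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical mode entry, as it might be unpacked from proplib. */
struct dispmode {
	int width, height;
	int refresh;		/* Hz */
};

/*
 * Return the index of an enumerated mode matching the requested
 * geometry, preferring the highest refresh rate; -1 if none matches.
 * Because only enumerated modes can be selected, an unsupported
 * configuration is rejected here rather than at the hardware.
 */
static int
pick_mode(const struct dispmode *modes, size_t n, int w, int h)
{
	int best = -1;

	for (size_t i = 0; i < n; i++) {
		if (modes[i].width != w || modes[i].height != h)
			continue;
		if (best == -1 || modes[i].refresh > modes[best].refresh)
			best = (int)i;
	}
	return best;
}
```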
The next interesting topic is acceleration. On most hardware, 2D drawing acceleration is supported in the same way as 3D acceleration: through a series of device-specific blitter commands that are inserted into a queue or ringbuffer of commands. It would make sense to support userland access to acceleration in the same way, using the already-established drm interface. However, this requires the user to have device-specific knowledge to formulate the correct commands, so this project may need to become even more like dri/mesa by creating a device-independent 2D drawing API, which then plugs into a device-dependent library.
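The split described above can be sketched as follows: a generic "fill rectangle" request is translated by a device-dependent routine into command words pushed onto a ring. The opcode value and word encoding here are entirely made up for an imaginary blitter; real hardware defines its own command format, and a device-independent API would reach routines like this through a per-driver function table.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical command stream: a fixed-size ring of 32-bit words. */
#define RING_WORDS	64

struct ring {
	uint32_t buf[RING_WORDS];
	size_t head;		/* next free slot */
};

/* Hypothetical opcode for an imaginary blitter. */
#define OP_FILL_RECT	0x01000000u

static void
ring_emit(struct ring *r, uint32_t word)
{
	r->buf[r->head] = word;
	r->head = (r->head + 1) % RING_WORDS;
}

/*
 * Device-dependent backend: translate a generic fill-rectangle
 * request into the four command words this device expects
 * (opcode, packed origin, packed size, color).
 */
static void
emit_fill_rect(struct ring *r, uint16_t x, uint16_t y,
    uint16_t w, uint16_t h, uint32_t color)
{
	ring_emit(r, OP_FILL_RECT);
	ring_emit(r, ((uint32_t)x << 16) | y);
	ring_emit(r, ((uint32_t)w << 16) | h);
	ring_emit(r, color);
}
```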
This approach has the added benefit that the same device-dependent drawing code can be re-used in the kernel to support console framebuffer acceleration, in a manner similar to how proplib is shared between userland and kernel.
The net effect is that a developer should be able to write a kernel device driver and a drawing-acceleration library just once, and then have support for all drawing systems without writing any more code.
| Jeremy Morse <email@example.com> |
| $Id: index.html,v 1.4 2009/07/13 11:58:08 j_morse Exp $ |