Implementing a Transformation in Embedded Display Design

The ascendancy of GPUs began with the introduction of 3D add-in cards in 1995, with widespread adoption of 32-bit operating systems and affordable PCs soon following. Before this, the graphics industry was dependent on more prosaic 2D graphics and costly non-PC architectures. 3D and visualisation graphics eventually came to dominate gaming, military simulators and medical imaging. Today, graphical display reproduction is used in a plethora of applications.

Let's look at how graphics architectures have changed over the years. The 1960s saw architectures based on display lists: specific instructions that drew lines and objects using dedicated processor units, an approach that was very resource-heavy. In the 1970s, scan-convert architectures translated incoming data streams into bitmapped video, laying the foundation for 3D; Atari's 2600, circa 1977, was based on this architecture. By the 1980s, frame buffer architectures had emerged, one example being ATI's Colour Emulation card used in Commodore computers of that era. The 1990s brought an explosion of algorithmic graphics processing: graphics accelerator cards arrived, along with APIs such as Silicon Graphics' OpenGL and Microsoft's DirectX. The new millennium saw complex graphics processing enter the mobile market with the Nokia N-Gage and Sony's PlayStation Portable, and, via OpenGL and DirectX, GPUs added programmable graphics processing.

Let's consider a typical graphics controller solution. The host interface is typically a high pin count parallel bus, providing the throughput needed to sustain high graphics data rates. As this data is shared, the memory controller provides memory management, whether the physical memory is on or off chip. The GPU core processes the graphics data, and traditionally a large frame buffer memory holds complete frames of bitmapped images for output via the display controller.

Next, let's look at a particular case: a household appliance GUI requiring sound, touch and display functions, plus low implementation cost. The solution proposed here offers a complete parallel engine for graphics, touch and audio on a single chip. A serial bus acts as the host interface, reducing pin count and packaging costs. A serial interface also means that the volume of data transferred between the microcontroller and the graphics chip is relatively low, so a lower-performance microcontroller can be specified.

Display list memory stores primitive objects, replacing the large graphics memory. Higher-level graphic widgets, including clocks, keys and gauges, can also be implemented as display lists. To ensure a seamless transition between the microcontroller updating the display list and the display controller refreshing the panel, a ping-pong mechanism completes updates on screen through register commands.

The graphics engine executes lines, points, rectangles and graphs, as well as alpha blending, masking, shadowing and anti-aliasing operations, without microcontroller involvement, allowing significant reductions in memory requirements. The graphics engine executes the display list once for every horizontal line: it processes the primitive objects in the display list and constructs a display line buffer, updating a pixel in the line buffer only if an object is visible at that position on the current line. Hence this architecture dispenses with large frame buffers; for comparison, a 480 x 272 panel at 16 bits per pixel would otherwise need roughly 255 KB of frame memory, whereas a display list of primitive objects needs only a small fraction of that.

Instead of traditional touch scanning and X-Y coordinate responses, invisible tags can be assigned to buttons or keys. When a touch event occurs within a tag area, the associated tag number is fed to the microcontroller, saving processing resources.
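To make the display-list approach concrete, the following C sketch shows how a host microcontroller might compose a short display list over the serial bus and then commit it with a single register write to trigger the ping-pong swap. The register addresses, command encodings and the spi_write32() helper are illustrative assumptions, not the actual EVE register map.

    /* Minimal sketch of building a display list and committing it with a
       ping-pong swap.  All register addresses, command encodings and the
       spi_write32() helper are illustrative placeholders. */

    #include <stdint.h>

    #define RAM_DL        0x100000u   /* assumed base address of display list memory */
    #define REG_DLSWAP    0x102450u   /* assumed swap-control register                */
    #define DLSWAP_FRAME  0x02u       /* assumed "swap after current frame" value     */

    /* Placeholder command encodings for a handful of primitives. */
    #define DL_CLEAR        0x26000007u
    #define DL_BEGIN_RECTS  0x1F000009u
    #define DL_END          0x21000000u
    #define DL_DISPLAY      0x00000000u
    #define DL_VERTEX(x, y) (0x80000000u | ((uint32_t)(x) << 21) | ((uint32_t)(y) << 12))

    /* Platform-specific SPI transfer of one 32-bit word to the graphics chip. */
    extern void spi_write32(uint32_t addr, uint32_t value);

    static uint32_t dl_offset;

    static void dl_cmd(uint32_t cmd)
    {
        spi_write32(RAM_DL + dl_offset, cmd);
        dl_offset += 4;
    }

    /* Compose one frame: clear the screen and draw a rectangular key,
       then tell the controller to adopt the new list at the next frame. */
    void compose_frame(void)
    {
        dl_offset = 0;
        dl_cmd(DL_CLEAR);
        dl_cmd(DL_BEGIN_RECTS);
        dl_cmd(DL_VERTEX(20, 20));     /* top-left corner of the key     */
        dl_cmd(DL_VERTEX(120, 60));    /* bottom-right corner of the key */
        dl_cmd(DL_END);
        dl_cmd(DL_DISPLAY);            /* terminate the display list     */

        /* Ping-pong swap: the new list becomes active only when the current
           frame has finished, so the panel never shows a half-written list. */
        spi_write32(REG_DLSWAP, DLSWAP_FRAME);
    }

Because the swap takes effect only at a frame boundary, the microcontroller can rebuild the list at its own pace without the panel ever displaying a partially updated screen.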
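The per-line rendering described above can be modelled in software as follows: the display list is walked once for every horizontal line, and only the pixels of objects that intersect that line are written into the line buffer before it is pushed to the panel. This is a conceptual sketch of the idea, not the hardware implementation.

    /* Conceptual model of line-buffer rendering: the display list is executed
       once per horizontal line, so only one line of pixels is ever buffered
       instead of a whole frame.  Types and primitives are illustrative. */

    #include <stdint.h>
    #include <string.h>

    #define H_RES  480
    #define V_RES  272

    typedef struct {            /* one primitive object in the display list */
        int x, y, w, h;         /* bounding rectangle                       */
        uint16_t colour;        /* RGB565 fill colour                       */
    } rect_t;

    static uint16_t line_buffer[H_RES];

    /* Assumed to push one finished line of pixels out to the LCD panel. */
    extern void panel_emit_line(const uint16_t *pixels, int count);

    void render_frame(const rect_t *dl, int dl_len)
    {
        for (int line = 0; line < V_RES; line++) {
            memset(line_buffer, 0, sizeof line_buffer);

            /* Execute the display list for this line only. */
            for (int i = 0; i < dl_len; i++) {
                const rect_t *r = &dl[i];
                if (line < r->y || line >= r->y + r->h)
                    continue;                       /* object not visible on this line */
                for (int x = r->x; x < r->x + r->w && x < H_RES; x++)
                    line_buffer[x] = r->colour;     /* update only intersecting pixels */
            }
            panel_emit_line(line_buffer, H_RES);
        }
    }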
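Likewise, the tag-based touch flow, seen from the microcontroller's side, might look like the sketch below: a tag number is attached to each key while building the display list, and the host later reads back a single register reporting which tag, if any, is currently touched. The register name and command encoding are again assumptions made for illustration.

    /* Sketch of tag-based touch handling: each key is given a tag number in
       the display list, and the host polls one register to learn which tag
       is currently touched.  Names and encodings are assumed. */

    #include <stdint.h>

    #define REG_TOUCH_TAG  0x102468u            /* assumed "current tag" register    */
    #define DL_TAG(n)      (0x03000000u | (n))  /* placeholder TAG command encoding  */

    #define TAG_START_KEY  1u
    #define TAG_STOP_KEY   2u

    extern uint32_t spi_read32(uint32_t addr);  /* platform-specific SPI read         */
    extern void dl_cmd(uint32_t cmd);           /* appends a word to the display list */
    extern void draw_key(int x, int y, int w, int h);

    void build_keys(void)
    {
        dl_cmd(DL_TAG(TAG_START_KEY));   /* pixels drawn from here carry tag 1 */
        draw_key(20, 20, 100, 40);
        dl_cmd(DL_TAG(TAG_STOP_KEY));    /* pixels drawn from here carry tag 2 */
        draw_key(20, 80, 100, 40);
    }

    void poll_touch(void)
    {
        uint32_t tag = spi_read32(REG_TOUCH_TAG) & 0xFFu;

        switch (tag) {
        case TAG_START_KEY: /* start the appliance */ break;
        case TAG_STOP_KEY:  /* stop the appliance  */ break;
        default:            /* no key touched      */ break;
        }
    }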
FTDI's Embedded Video Engine, with its object-oriented approach, is facilitating the implementation of more sophisticated GUIs while keeping within budgetary constraints. ftdichip/EVE.htm
Posted on: Tue, 17 Jun 2014 11:15:05 +0000
