GUIs for Small Devices—Expanded Functionality in a Tighter Space
The functionality of mobile and handheld devices keeps growing, and well-designed graphical interfaces are meeting the challenge by presenting users with a familiar mode of operation. Now a tool designed specifically for such needs is available.
BY TOM WILLIAMS, EDITOR-IN-CHIEF
There was a time, now past, when graphical displays in embedded systems were relatively rare and touchscreen user interfaces completely unheard of. Today, of course, we have become accustomed to interacting with devices in the home and in industry through a graphical user interface (GUI). It is becoming almost a reflex. Who among us has not at some point sat in front of a laptop running Windows 7 and reached out with a finger to make the screen scroll? C’mon!
We are witnessing a convergence of trends. One is a blurring of the distinction between consumer and embedded devices, with embedded technology increasingly showing up everywhere from smartphones, tablets and home appliances to industrial control systems and a vast array of handheld mobile devices. Another is the increasing feature richness of many small systems. There is literally not enough physical room on the surface of these devices to fit all the buttons and switches that would be needed to operate them.
A graphical touchscreen interface seems a natural solution to a population of users already accustomed (if not addicted) to the smartphone/tablet world. This has come to be known as the “iPhone effect,” and it is influencing GUI design across the board. It has also spawned the “bring your own device” (BYOD) phenomenon, in which access to a company’s applications, and even control of medical and factory equipment, is enabled from an employee’s smartphone or tablet (Figure 1).
Figure 1: The functions displayed on this medical device may not be the only ones that it supports. A hierarchy of widgets and controls can be accessed to sequence through a wide variety of available functions. The display data is linked to the application logic via APIs that can transmit data, commands and resulting displays between screen and code.
Until now, most GUI development tools have been oriented toward the desktop, and embedded solutions derived from them have carried a fairly big footprint in terms of memory and resource usage. Now, however, Express Logic has introduced GUIX, a low-overhead runtime engine and development tool that runs on Express Logic’s ThreadX RTOS, along with a PC-based development suite called GUIX Studio. GUIX is initially targeted at the ARM Cortex-M class of processors and mid-range Texas Instruments and Renesas processors. Eventually, of course, the plan is to adapt it to all the CPUs that Express Logic supports.
GUIX runs natively on the processor and is compiled from ANSI C source code. Its central function is to write graphical data to a block of display memory and to pass events back to the application’s event handler. Inputs can be touchscreen or pen-down events, which GUIX passes to the application; it would simply report, for example, “User picked entry #32.” The application developer decides what action is to be taken. The application then performs whatever operations are required and calls back into the GUIX API layer, which writes the appropriate pixel data to display memory.
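The division of labor described above can be sketched in a few lines of C. The event type and handler names here are illustrative assumptions, not the actual GUIX API: the framework reports *what* happened, and the application decides what to do about it.

```c
/* Hypothetical event types and handler illustrating the flow described
   above; names are illustrative, not the actual GUIX API. */
typedef enum { EVT_PEN_DOWN, EVT_LIST_SELECT } evt_type;

typedef struct {
    evt_type type;
    int      payload;   /* e.g. index of the list entry picked */
} gui_event;

/* The application's event handler: the framework reports that the
   user picked entry #N; the application maps that to an action and
   returns the result it wants displayed. */
static int app_event_handler(const gui_event *e)
{
    switch (e->type) {
    case EVT_LIST_SELECT:
        /* Run whatever operation entry e->payload maps to, then hand
           the result back for the framework to draw. */
        return e->payload;   /* stand-in for "result to display" */
    case EVT_PEN_DOWN:
    default:
        return -1;           /* no screen update needed */
    }
}
```

The key design point is that no drawing happens inside the handler itself; the handler computes a result and the framework's API layer turns it into pixels.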
Of course there is a wide variety of LCD displays, but most operate by reading pixel data from their own memories (or from system memory), interpreting it and putting the resulting colors at the appropriate pixel locations. GUIX supports all the common formats, from 1-bit per pixel up to various 24- and 32-bit formats with RGB, BGR and other channel orderings, by supplying drivers that write pixel data to display memory in the appropriate format.
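To make the format-conversion job concrete, here is the kind of packing a driver for a 16-bit display performs: squeezing a 24-bit RGB color into the widely used RGB565 layout (5 bits red, 6 bits green, 5 bits blue). This is a generic sketch of the technique, not GUIX driver code.

```c
#include <stdint.h>

/* Pack a 24-bit RGB color into 16-bit RGB565, the sort of per-pixel
   conversion a format-specific display driver does when writing
   pixel data to display memory. */
static uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r & 0xF8) << 8) |   /* top 5 bits of red   */
                      ((g & 0xFC) << 3) |   /* top 6 bits of green */
                      (b >> 3));            /* top 5 bits of blue  */
}
```

For example, pure white (255, 255, 255) packs to 0xFFFF and pure red (255, 0, 0) to 0xF800.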
The data flow in GUIX involves input drivers such as touchscreen or timer drivers that invoke GUIX APIs to inject events into the framework. Application threads also invoke GUIX APIs to create and display screens according to the internal processing of the application. GUIX widgets react to user input to generate signals that are routed back to application event handlers. A hardware-specific, optimized graphics driver writes the pixel data to display memory from where it is rendered to the screen by external or onboard dedicated hardware, depending on the specific display used (Figure 2).
Figure 2: The GUIX data flow diagram.
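The producer/consumer shape of that data flow, where input drivers inject events that the GUI thread later dispatches, can be sketched as a small fixed-size event queue. This is an illustrative sketch of the pattern, not the GUIX implementation.

```c
#include <stddef.h>

/* A minimal fixed-size event queue: input drivers inject events on
   one side; the GUI thread drains them on the other. Illustrative
   only -- not the GUIX implementation. */
#define QUEUE_LEN 8

typedef struct { int type; int data; } evt;

static evt    queue[QUEUE_LEN];
static size_t head, tail, count;

/* Called from a touchscreen or timer driver (producer side). */
static int evt_inject(evt e)
{
    if (count == QUEUE_LEN) return -1;   /* queue full: drop event */
    queue[tail] = e;
    tail = (tail + 1) % QUEUE_LEN;
    count++;
    return 0;
}

/* Called from the GUI thread (consumer side). */
static int evt_next(evt *out)
{
    if (count == 0) return -1;           /* nothing pending */
    *out = queue[head];
    head = (head + 1) % QUEUE_LEN;
    count--;
    return 0;
}
```

In a real RTOS the queue would be protected by the kernel's messaging or synchronization primitives so that driver and GUI thread can safely run concurrently.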
We already mentioned the lack of surface area for all the buttons some devices would need. Of course, there is not much surface area on a small device display either. However, a GUI can deal with almost any level of complexity by implementing a display hierarchy where, for example, selecting a certain widget would lead to a screen displaying a variety of options and operations associated with the choice. A good part of the art of designing GUIs is to set up and organize a logical hierarchy and to manage it via the application.
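A display hierarchy of this kind is essentially a tree of screens, where selecting an entry descends to a child screen of related options. The types below are a hypothetical, much-simplified sketch of that idea, not GUIX data structures.

```c
/* A display hierarchy as a tree of screens: each entry on a screen
   may lead to a child screen. Hypothetical types for illustration. */
typedef struct screen {
    const char    *title;
    struct screen *children[4];   /* sub-screens reached by selection */
    int            n_children;
} screen;

/* Selecting entry i on the current screen descends one level if that
   entry has a sub-screen; otherwise the current screen stays up. */
static screen *screen_select(screen *cur, int i)
{
    if (i >= 0 && i < cur->n_children && cur->children[i])
        return cur->children[i];
    return cur;
}
```

Organizing this tree well, so that users find any function in a few taps, is exactly the design art the paragraph above describes; the application simply walks the tree in response to selections.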
GUIX provides the ability to transition between screens and can stack screens rather than regenerating their pixel data each time. The number of screens that can be stacked is, of course, dependent on resources, specifically how much RAM you want to devote to stacked screens. Stacking can make drilling down through a menu, screen to screen, much faster.
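The stacking idea can be sketched as a plain screen stack: drilling down pushes the new screen, and backing up discards it so the previous screen, still intact in RAM, reappears without being rebuilt. A minimal sketch with hypothetical names, where the fixed depth stands in for the RAM trade-off just mentioned:

```c
#include <stddef.h>

/* Stacking screens so that backing up reuses an already-built screen
   instead of regenerating its pixel data. Illustrative sketch. */
#define MAX_DEPTH 4   /* the RAM budget: how many screens we keep */

typedef struct { const char *name; } screen_t;

static screen_t *stack[MAX_DEPTH];
static int       depth;

static int screen_push(screen_t *s)   /* drill down one level */
{
    if (depth == MAX_DEPTH) return -1;   /* out of stack room */
    stack[depth++] = s;
    return 0;
}

static screen_t *screen_back(void)    /* back: previous screen is
                                         still intact, no rebuild */
{
    if (depth > 1) depth--;              /* discard current screen */
    return depth ? stack[depth - 1] : NULL;
}
```

Note that backing up from the root screen simply stays on the root; a deeper stack buys faster navigation at the cost of RAM.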
A Desktop Development Tool
While it is possible to define widgets and screens in code using GUIX alone, doing so would be quite time-consuming. A desktop development tool called GUIX Studio allows rapid prototyping of designs in a WYSIWYG environment and can then generate the code to be dropped into the application. GUIX Studio lets you specify things like button size, shape, color and position, along with the function called by a given widget. It also lets you define and organize the display hierarchy so that it can be quickly mated to the application’s API. In this sense, it keeps the display data separate from the application code; the data is married to the code using GUIX Studio in the course of developing the application.
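One common way such tools keep display data separate from application code is to emit the GUI definition as data tables that generic framework code walks at run time. The table below is a hypothetical, much-simplified sketch of that pattern; it is not the format GUIX Studio actually generates.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical widget specification table: the GUI is data, and the
   application supplies only the event hooks. Not actual GUIX output. */
typedef struct {
    const char *name;        /* widget identifier          */
    int         x, y, w, h;  /* position and size          */
    int       (*on_event)(int event);  /* application hook */
} widget_spec;

static int start_pressed(int event) { (void)event; return 1; }

static const widget_spec main_screen[] = {
    { "start_button", 10, 10, 80, 30, start_pressed },
    { "stop_button",  10, 50, 80, 30, NULL },
};

/* Generic lookup used when wiring the display data to the code. */
static const widget_spec *find_widget(const char *name)
{
    size_t i;
    for (i = 0; i < sizeof main_screen / sizeof main_screen[0]; i++)
        if (strcmp(main_screen[i].name, name) == 0)
            return &main_screen[i];
    return NULL;
}
```

Because the table is pure data, a designer can reposition or restyle widgets without touching the application logic, which is the separation the article goes on to discuss.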
In addition, GUIX Studio has integrated font generation that lets developers create their own fonts in monochrome or anti-aliased formats. Fonts can include any set of characters, including Unicode. Developers can also import graphics from JPG, BMP or PNG files and convert them to compressed GUIX pixel maps. Widget types are available that can incorporate proprietary graphics for a custom look and feel, and it is also possible to customize existing stock GUIX widgets.
The GUIX Studio environment running on a PC generates the code that runs on the native processor, and it can be used in two ways. It can be used alone, in which case some additional coding is involved in mating the generated code to the target application and the ThreadX RTOS. Or it can be used on the PC along with a PC version of ThreadX. Since GUIX is coupled with ThreadX on the target, the generated GUI code should then match the RTOS with no issues.
Since this is all compiled C code, it is possible to run the GUI in the RTOS environment with the application on the PC and have it behave as it will on the target system. This has other advantages. Since it is not necessary to be a programmer to design a GUI with GUIX Studio, other team members involved with product development can help define the user interface.
Since the code that defines the user interface is completely distinct from that of the application, there is little opportunity for non-programmers to interfere with the application logic. They can, however, help define the system’s functionality by designing screens and widgets and communicating their desired functions to the programming team. The distinction between display data and code also comes in handy when testing for certification and compliance for things such as medical devices. Making changes to the internal processing of the code need not entail making corresponding changes to the display data as long as it is already designed to display the desired inputs and outputs.
Express Logic, San Diego, CA.