
At Emutex we design and develop high performance software for our customers' embedded systems-based products and solutions. We build bespoke solutions for a variety of IT networking, telecommunication, automotive and industrial applications. When required, we also work with our silicon and embedded systems design partners to design and source hardware systems that are optimally tailored to meet our customers' specific application needs.

 

Dan O'Donovan - CTO, Senior Embedded Software Engineer

 

Emutex designed and implemented a mixed analog/digital test station, coupled with a Continuous Integration system, to validate a bespoke embedded firmware solution.

 

 

THE EMUTEX APPROACH

A highly configurable firmware that interacts with a broad variety of peripherals, such as the example described here [link], requires a comprehensive Continuous Integration (CI) testing platform to monitor and verify the correct performance of the Firmware (and associated elements) throughout the whole software development life-cycle, from the early stages of development through to product releases and subsequent maintenance. For an embedded system that interfaces with the physical environment through the use of analog sensors, a key challenge is to emulate the range of possible sensor stimuli that can occur as inputs, in a way that can be automated.

THE CHALLENGE

Testing an embedded platform that supports a wide range of differing physical inputs and configuration options brings many challenges. For example:

  • Scale of test case permutations: The multiplication of configurations, operating modes, input types and ranges of input values generates a quantity of test cases that can be difficult to cover exhaustively, even with automation (the short sketch after this list illustrates how quickly the permutations grow). However, it is paramount to cover the widest range possible to ensure the reliability and accuracy of the Firmware.
  • Test hardware replication: The test hardware should be easy to replicate, to allow multiple instances to be used in parallel to shorten test times and provide timely feedback to the software developers, and also to allow tests to be reproduced across multiple sites where needed. At the same time, project budget and timeline constraints must also be considered when designing the test hardware to minimise component and development costs.
  • Emulation of analog sensor inputs: Analog sensor inputs must be correctly emulated with reasonable consistency and accuracy, but without resorting to special test chambers to control the physical magnitudes (like humidity, pressure, temperature …). Such test chambers are typically used for high-accuracy testing, calibration or certification towards the end of the product development cycle but are typically not viable for use in the automation of testing during earlier stages of development due to their high cost and need for manual intervention.
  • Automation for use within a Continuous Integration (CI) system: Continuous Integration brings many benefits, especially for complex systems where the breadth of test permutations and hence test effort is significant. Using CI along with a comprehensive suite of consistent, repeatable tests provides prompt feedback to developers for quick resolution of new issues, increases confidence in the stability of the code base, and ultimately leads to quicker project delivery times.
  • Capacity to evolve along with firmware specifications: In an Agile Methodology-based project, changes and extensions in the specification are expected. The test station must be designed with flexibility in mind, both at hardware and software level.
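To give a sense of the scale involved, the short Python sketch below multiplies out a purely hypothetical set of configurations, operating modes, input types and stimulus levels; the axes and values are illustrative only, not those of the real project.

    # Illustrative only: hypothetical test axes, not the real project's parameters.
    from itertools import product

    configurations = ["default", "low_power", "high_rate"]    # device configurations
    operating_modes = ["continuous", "single_shot", "burst"]  # sampling modes
    input_types = ["resistive", "voltage", "current"]         # kinds of analog input
    input_levels = range(0, 101, 5)                           # 21 stimulus levels (% of full scale)

    test_cases = list(product(configurations, operating_modes, input_types, input_levels))
    print(len(test_cases))   # 3 x 3 x 3 x 21 = 567 cases from just four small axes

Even this toy example yields hundreds of combinations, which is why automation and careful selection of representative cases are essential.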

THE APPROACH AND SOLUTION

The core of the test station in this example is a matrix of relays and a PCB containing a large set of analog components and switches to emulate all the specified connections between the sensors and the software under test (SUT). Those elements, plus some ancillary ones, are all controlled by an UP Board, as in the diagram below:

The Assembly

[Diagram: test station assembly]

The Continuous Integration system implemented at Emutex is based on GitLab, running on a dedicated server. It hosts the repository with all the source code and resources used in the project, and also provides the CI mechanism as part of the quality control put in place at our company.

The CI system will launch a complete round of tests when one of the following conditions is met; a simple sketch of how these triggers can be mapped to a test scope follows the list:

  • New git tag applied: This is used mainly when something significant, such as a new feature, has been completed, or before releasing a new version of the package to the customer.
  • Manual trigger: This is usually used when something specific needs to be checked, for example when there is a change in the hardware, a new bug has been found, or different conditions need to be tested. In most cases, only particular tests are executed, and they can be run against a specified git branch instead of the usual master branch to evaluate a new change-set prior to merging. That way, the needed tests run much faster and the risk of disruption to the master branch (and impact on the rest of the team) is greatly reduced.
  • Scheduled nightly trigger: At a selected time each night, the CI system triggers the full set of tests based on the latest version of code on the master branch. The whole process may take several hours, and so is launched during a quiet time when little or no activity is expected on the branch and contention for shared resources (e.g. build server, test hardware) is lowest.
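As a rough illustration of how these triggers can be mapped to a test scope, the Python sketch below reads GitLab CI's predefined CI_PIPELINE_SOURCE and CI_COMMIT_TAG variables; the helper name and the scope labels are assumptions for this example, not the actual pipeline scripts.

    # Hypothetical mapping from CI trigger to test scope; names are illustrative only.
    import os

    def select_test_scope() -> str:
        source = os.environ.get("CI_PIPELINE_SOURCE", "")   # e.g. "push", "web", "schedule"
        tag = os.environ.get("CI_COMMIT_TAG")                # set only for tag pipelines

        if tag:
            return "full"      # new git tag: complete suite before a release
        if source == "schedule":
            return "full"      # nightly run on the latest master branch
        if source == "web":
            return "partial"   # manual trigger: usually a reduced, targeted set
        return "smoke"         # ordinary pushes: quick sanity checks only

    if __name__ == "__main__":
        print(f"Selected test scope: {select_test_scope()}")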

When triggered, the CI system typically executes the following steps:

  1. Static analysis: Checks the quality of the source code and its compliance with applicable quality and style guidelines such as MISRA C/C++, ISO 26262, Python PEPs, etc.
  2. Source code building: Builds the source code and creates the binary outputs and candidate release packages to be used later for unit, function and system tests.
  3. Unit and function tests: These tests are typically executed in a pure software simulation environment, possibly on the build server itself or another dedicated machine, i.e. they don't use the hardware test rig depicted above.
  4. System tests: The files needed to execute the system tests (such as a firmware release package, test scripts/applications, and system configurations), all corresponding to a tagged version of the source code, are created and sent to the Test Controller, which carries out a partial or full suite of system tests on the test hardware. Results and log files from the tests are fed back to the CI system and archived with the source code, and a notification of the results is sent automatically to the software team.
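A much-simplified view of the hand-off in step 4 is sketched below; the host name, paths and the remote run_system_tests command are assumptions for illustration, not the real deployment scripts.

    # Hypothetical dispatch of a release package to the Test Controller over SSH/SCP.
    import subprocess

    TEST_CONTROLLER = "user@testctrl.local"        # assumed address of the Test Controller
    PACKAGE = "release/firmware-package.tar.gz"    # assumed candidate release package
    REMOTE_DIR = "/opt/system-tests/incoming"      # assumed staging directory

    # Copy the candidate package (and test scripts) to the Test Controller...
    subprocess.run(["scp", PACKAGE, f"{TEST_CONTROLLER}:{REMOTE_DIR}/"], check=True)

    # ...start the system-test run remotely and wait for it to complete...
    subprocess.run(["ssh", TEST_CONTROLLER,
                    f"run_system_tests --package {REMOTE_DIR}/firmware-package.tar.gz"],
                   check=True)

    # ...then pull the logs and results back so the CI job can archive and publish them.
    subprocess.run(["scp", f"{TEST_CONTROLLER}:/opt/system-tests/results/latest.tar.gz",
                    "artifacts/"], check=True)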

The Test Controller is typically a powerful Single Board Computer (SBC), such as an UP Board, with network connectivity and the necessary I/O interfaces (e.g. GPIO, UART) to connect with the System under test. It controls most of the other elements in the diagram above and runs the sequences of operations needed for every test case.

The Analog Sensor Simulator is a key element for analog testing. It simulates the values of many types of sensors, based on their characteristic magnitude: impedance, voltage or current. It uses a matrix of digital potentiometers and DACs with voltage or current outputs, connected through analog switches, operational amplifiers and precision references in the form of resistors or voltage references. This custom-made device avoids the need for special closed chambers to precisely adjust the measured values (temperature, pressure, humidity, etc.) on real sensors. Additionally, it can also simulate power supplies, to facilitate power consumption measurement and to perform tests under different power supply voltages if required.
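As a simplified illustration of how a resistive sensor value could be emulated, the sketch below converts a target resistance into a wiper code for an 8-bit digital potentiometer and writes it over SPI using the spidev library; the full-scale resistance, SPI command byte and wiring are assumptions, not the actual simulator design.

    # Simplified sketch: emulate a resistive sensor with an 8-bit digital potentiometer.
    import spidev

    R_FULL_SCALE = 10_000.0   # assumed end-to-end resistance of the potentiometer, in ohms
    STEPS = 255               # 8-bit wiper: 256 positions, 255 steps

    def resistance_to_code(r_target: float) -> int:
        """Convert a target resistance to the nearest wiper code, clamped to range."""
        code = round(r_target / R_FULL_SCALE * STEPS)
        return max(0, min(STEPS, code))

    def set_emulated_resistance(spi: spidev.SpiDev, r_target: float) -> None:
        """Write the wiper code; a two-byte 'command, data' frame is assumed here."""
        spi.xfer2([0x00, resistance_to_code(r_target)])   # hypothetical 'write wiper 0' command

    if __name__ == "__main__":
        spi = spidev.SpiDev()
        spi.open(0, 0)                 # SPI bus 0, chip select 0 (board-specific)
        spi.max_speed_hz = 1_000_000
        set_emulated_resistance(spi, 2_200.0)   # emulate a ~2.2 kohm sensor
        spi.close()

The same idea extends to voltage and current outputs by driving the DAC channels instead of the potentiometer wipers.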

As a complement to the above Simulator, the Analog Sensor Bank contains a set of real sensors exposed to ambient conditions to cover specific cases that are very difficult to simulate.

The Digital Sensor Bank includes all the digital sensors to be tested in the project. As the digital sensors are generally more complex in nature, these are typically not emulated but are instead used directly in the test hardware configuration to validate the communication protocol and register access implementation in the software. Cases that depend on changes in the physical magnitude at the sensor inputs are simulated separately and covered by the function tests.

The Relays Bank is a large network of relays arranged to select analog and digital signals and power sources (generated by the Analog Sensor Simulator and the Analog and Digital Sensor Banks) and to multiplex and distribute them to the System Under Test and the Reference Instruments. Its exact configuration is controlled by the Test Controller via USB. The relays chosen are of the electromechanical type, due to their low on-resistance and nearly infinite off-resistance, so that they don't affect the analog values. Their drawback is slow switching; to overcome that, the tests are grouped by relay configuration so the switching is done once before a set of tests rather than for every test case.
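A minimal sketch of that grouping strategy, with a made-up test-case structure, might look like this:

    # Illustrative grouping of test cases by relay configuration to minimise switching.
    from itertools import groupby

    # Hypothetical test cases: (relay_configuration_id, test_name)
    test_cases = [
        ("relays_A", "adc_linearity"),
        ("relays_B", "digital_sensor_regs"),
        ("relays_A", "adc_noise"),
        ("relays_B", "uart_timeout"),
    ]

    def apply_relay_configuration(config_id):
        print(f"Switching relays to {config_id}")     # placeholder for the USB relay control

    def run_test_case(name):
        print(f"Running {name}")                      # placeholder for the real test execution

    def run_grouped(cases):
        # Sort so that cases sharing a relay configuration are adjacent, then switch
        # the relays once per group instead of once per test case.
        for relay_config, group in groupby(sorted(cases), key=lambda c: c[0]):
            apply_relay_configuration(relay_config)   # slow electromechanical switching
            for _, test_name in group:
                run_test_case(test_name)              # fast, no further switching needed

    run_grouped(test_cases)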

The Software Under Test (yellow box in the diagram above) includes the firmware flashed onto the System and the API built into the test cases loaded on the Host.

  • Host with API: Several MCU-based boards and SBCs can be used as the Host. In this case, an UP Board running Linux was used. The Test Controller loads it with a test binary file (which includes the API) and commands it to execute the sequence of test cases included. When the tests need to be run on a set of hosts, they can be connected in “parallel” using different USB connectors from the Test Controller, sharing the link with the System where possible.
  • System with Firmware: The System is the customer hardware, which is flashed (e.g. through a JTAG or UART connection) with the Firmware to be tested. The System receives the analog and digital signals from the Relays Bank and, from the Host, the configuration and the commands to control the Firmware. The connection with the Host can use different types of links (UART, USB, SPI, etc.), depending on the type of Host.

Reference Instruments: a set of instruments connected via USB to the Test Controller. They measure ambient conditions (which affect several analog and digital sensors) and the analog signals coming from the Analog Sensor Simulator and the Analog Sensor Bank (through the Relays Bank). All these measured values are collected by the Test Controller and compared with the values returned from the System and Firmware through the Host. The instruments are also used to measure the power consumption of the System in its different modes of operation.
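A much-simplified version of that comparison, with an arbitrary 2% relative tolerance, could look like this:

    # Simplified pass/fail comparison of a firmware reading against a reference instrument.
    def within_tolerance(reference: float, reported: float, tolerance: float = 0.02) -> bool:
        """Pass if the firmware-reported value is within the given relative tolerance
        of the reference instrument reading; the 2% default is purely illustrative."""
        if reference == 0.0:
            return abs(reported) <= tolerance
        return abs(reported - reference) / abs(reference) <= tolerance

    # Example: reference instrument reads 3.300 V, firmware reports 3.280 V (about 0.6% off)
    print(within_tolerance(3.300, 3.280))   # True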

Typical Operation

[Diagram: typical test operation sequence]
  1. The Continuous Integration System, after the build and the static, unit and function tests have passed, sends all the files needed to perform the complete set of system tests to the Test Controller.
  2. Once loaded with all the needed scripts and binaries, the Test Controller flashes the firmware onto the System under test. This firmware is not changed during the tests.
  3. The Test Controller enters a loop to iterate through the test binaries received – each one checks different aspects of the Host API and the Firmware (the sketch after this list outlines the overall loop structure). The first thing the Test Controller does is configure the Reference Instruments, so that they collect the data relevant to the current Test Binary from the environment, the sensors and the simulator.
  4. The relays are configured to transmit the proper analog and digital signals to the System and the Reference Instruments for the current test application.
  5. The current Test Binary is copied to the Host SBC or MCU, ready to start on command. Then the Test Controller enters a nested loop to scan through all the simulator configurations supplied for that Test Binary.
  6. The configuration selected by the loop is loaded into the Analog Sensor Simulator.
  7. The Test Controller enters another inner loop, which scans through the test cases for the current test binary and current analog simulation. At this point, everything external to the Software Under Test is already configured. Next, the selected test case is activated by a command to the Host.
  8. Every test case applies a specific configuration to the System under test, which is transmitted to the System from the Host.
  9. When the System under test is configured, the Host gives the order to start measuring.
  10. The Firmware samples or reads the signals coming from the Relays Bank, which in turn are generated by the Analog Sensor Simulator and the real Digital or Analog Sensors blocks.
  11. The results from the Firmware are returned to the Host.
  12. The Host returns these results to the Test Controller.
  13. The Test Controller reads the reference values from the Reference Instruments. In the diagram above this happens after the results are received from the Host, but in other tests the references can be read during the sampling performed by the Firmware. The collected values are compared with the results from the Host and with the expected results embedded in the test. The evaluation (pass/fail) is stored in the Test Controller along with all the related information obtained, and the loop continues.
  14. After exiting all the loops, the Test Controller returns all the evaluations, gathered data and results to the CI system, which parses and publishes them, mostly in human-readable form. That way, the overall evaluation of the current Firmware and Host API can be reviewed by the team of developers and testers to assess the current level of quality reached and to take the appropriate actions, if needed.
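In rough pseudo-Python, the nested loops described in steps 3 to 13 reduce to the skeleton below; every name is a placeholder stub standing in for the real Test Controller scripts.

    # Skeleton of the Test Controller's nested test loops; all names are placeholder stubs.
    def configure_reference_instruments(binary): print(f"[refs] configured for {binary}")
    def configure_relays(binary):                print(f"[relays] routed for {binary}")
    def load_on_host(binary):                    print(f"[host] loaded {binary}")
    def load_simulator(config):                  print(f"[sim] applied {config}")
    def run_on_host(binary, case):               return 1.0   # result reported via the Host
    def read_reference_instruments():            return 1.0   # reference measurement
    def evaluate(case, reported, reference):     return (case, reported, reference, "PASS")

    test_binaries = ["analog_api_tests", "digital_api_tests"]                     # step 3
    simulator_configs = {b: ["sim_cfg_1", "sim_cfg_2"] for b in test_binaries}    # steps 5-6
    test_cases = {b: ["case_1", "case_2"] for b in test_binaries}                 # step 7

    results = []
    for test_binary in test_binaries:                      # step 3: outer loop over binaries
        configure_reference_instruments(test_binary)       # step 3
        configure_relays(test_binary)                      # step 4
        load_on_host(test_binary)                          # step 5

        for sim_config in simulator_configs[test_binary]:  # steps 5-6: simulator configurations
            load_simulator(sim_config)

            for test_case in test_cases[test_binary]:      # steps 7-12: individual test cases
                reported = run_on_host(test_binary, test_case)
                reference = read_reference_instruments()   # step 13: reference values
                results.append(evaluate(test_case, reported, reference))

    print(f"{len(results)} evaluations ready to return to the CI system")   # step 14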

CONCLUSIONS

The test station described in this example, once integrated with the CI system, became key to the progression of the project for which it was designed, helping to cope with the evolution of the specifications while keeping the Firmware and the Host API nearly bug-free. Designing, assembling and programming the test station required a considerable effort at the beginning of the project, but it paid off quickly once it started to work, delivering benefits beyond the scope of the project for both the development team at Emutex and the customer. Based on the experience gathered in this project, Emutex is currently developing a set of simple hardware and software components that future test stations can use as ready-to-connect modules. The goal is to reduce the overall effort at the beginning of new projects of this nature and to quickly create a modular system based on reliable, proven hardware and software elements.


Questions? Contact us.

 

We're here to help. Contact us and speak with our representatives who will answer any questions you might have.

 
