Combining Inference, Real-Time Robotics, Machine Learning, and AI at the Embedded Edge with PHYTEC and TI Sitara™ Platforms
At the beginning of January, a few of my colleagues and I attended CES. We were blown away by how prevalent AI, machine learning, millimeter wave, and 5G were at the show. Most of us in the industry already appreciate the importance of these technologies, but we were surprised to see them begin reaching a mainstream, consumer audience. We also noticed that many of the demonstrations and applications were running on power-hungry x86 systems, cloud compute systems, and custom-built silicon. In fact, we rode in a self-driving Lyft, and the first thing I noticed was the considerable fan noise from whatever machine was in the trunk. One can admire the massive compute power of these systems while also questioning their efficiency and physical scalability (scaling down, not up). Many silicon vendors, such as Texas Instruments, are trying to address this concern: moving intelligence to the ‘Embedded Edge’ reduces the data transferred over networks, lowers power consumption, and enables distributed, balanced computing.
Creating the demo
Here at PHYTEC, we felt there was a need for a demonstration of all of these technologies working together on one efficient system. Our idea was to use a single advanced application processor, capable of accelerating every specialized task required by these new technologies, to control all aspects of the system. Texas Instruments' Sitara™ AM57x application processor provides a robust dual-core Arm® Cortex®-A15, discrete C66x DSPs, discrete PowerVR SGX54x GPUs, TI's Embedded Vision Engines (neural network accelerators), dedicated image processors, and dedicated real-time communication processors. PHYTEC's phyCORE-AM57x System on Module is built around the Sitara™ AM57x processor and gave us an easy path for developing our demo. So, with hardware in mind, we set off to figure out what we should build.
Now, the team here at PHYTEC didn't want to make just any demonstration! We wanted to make it fun and interactive. Somehow, around the office, the idea of a robot playing Connect Four® came up… To our surprise, there happens to be a version of the game played with balls that you throw into the columns instead of dropping in pucks. Balls, compared to pucks, are much easier for a robot to handle. However it came to be, it was decided that we would build a human-vs-machine version of Connect Four®.
So after a few months (we had started this prior to CES) of serious 3D modeling and printing, software design, and embedded systems design, we ended up with our Connect Four® demo, code-named Connect-0100!
I made a short film that explains the various components, how each part is accelerated (some of the acceleration isn't implemented yet, but we will get there), and the overall flow of the demo.
Come see the demo!
We are unveiling our demo at ATX 2020 and Embedded World 2020. Stop by either show to check it out and try to BEAT THE MACHINE!
Stop by booth #3399 at ATX or in hall 1, booth #438 at Embedded World!
Future blog posts on this subject
I plan to make this blog a multi-part series. Stay tuned for more posts that will go into detail. In the future we will cover:
- How we used six (yes, six!) 3D printers to get the job done
- How to develop and run a CNN-based model on an embedded system
- Offloading software processes to a real-time co-processor
And hopefully more!
Stay tuned by signing up for blog-post notifications!