The Big Data of Robotics

Is robotics part of Big Data, or the reverse?

“Big” certainly applies. When NASA announced that Google was leasing 1,000 acres of California’s former Moffett Field Naval Air Station for more than $1 billion, it explained that “a Google subsidiary called Planetary Ventures LLC will use the hangars for research, development, assembly and testing in the areas of space exploration, aviation, rover/robotics and other emerging technologies.”


Hangar One at Moffett Field. Credit: Chris Figge | Flickr

Robotics has been front-page news, and not only because of drone strikes. The public imagination has been captivated by the European Space Agency’s comet lander, Philae. Even though the lander concept was hatched almost 20 years ago and the mission launched ten years back, Philae is a remarkable, compact robot. Not only did it have to fly with Rosetta and manage a controlled touchdown on an odd-shaped comet, but it must also coordinate experimentation and reporting for its ten scientific instruments in a space environment.

One way to think about the current union of Big Data and robotics is to walk through some of the steps needed to build one yourself.

All-Seeing

A super-capable robot needs to see really well. You might want to splurge on the best imaging system available. One candidate might be the DARPA Autonomous Real-time Ground Ubiquitous Surveillance-Imaging System (ARGUS-IS). According to partially declassified reports, the BAE-developed ARGUS-IS operates by tethering multiple sensors together into a unit that totals 1.8 billion pixels of resolution and captures video at 12 frames per second. Still more impressive, and more demanding, is the speed at which the image data is sent: 600 gigabits per second.
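To get a feel for what 600 gigabits per second implies, here is a rough back-of-the-envelope sketch. The assumption that the figure refers to roughly uncompressed pixel data is ours, not a claim from the declassified reports.

```python
# Back-of-the-envelope check on the ARGUS-IS numbers quoted above.
# Assumption: the 600 Gbit/s link carries roughly uncompressed pixel data.
pixels_per_frame = 1.8e9        # 1.8 gigapixels per mosaic frame
frames_per_second = 12          # quoted video rate
link_rate_bits = 600e9          # 600 gigabits per second

pixels_per_second = pixels_per_frame * frames_per_second
bits_per_pixel = link_rate_bits / pixels_per_second

print(f"{pixels_per_second:.2e} pixels/s")     # ~2.16e10 pixels per second
print(f"{bits_per_pixel:.1f} bits per pixel")  # ~27.8 bits/pixel, in the ballpark of 24-bit color
```

In other words, the quoted link rate is roughly what it takes to move the raw mosaic in real time, before any processing has happened.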

But that data is very low level, not directly usable even by your special robot. Processing that stream for the purposes of military intelligence takes additional software, which the project team calls Persistics. Persistics would give your robot a serious field of vision: a 10-square-mile area, concurrently viewable in up to 65 independent tracking windows.

Processing with techniques such as the Hough transform is needed to recognize features in raw visual imagery, so ample computing power will be required. Lots of processors. Why not? Assuming, of course, you can power them while your robot perambulates.
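As a minimal sketch of what that per-frame work looks like, here is a Hough-transform line detector using OpenCV. The image path and parameter values are placeholders we chose for illustration; a real pipeline would run something like this on every frame of the stream.

```python
import cv2
import numpy as np

# Minimal Hough-transform sketch: detect straight-line features in one frame.
# "frame.png" is a placeholder path, not part of the original article.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)   # edge map feeds the transform

# Probabilistic Hough transform: returns endpoints of detected line segments.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=10)

print(f"Detected {0 if lines is None else len(lines)} line segments")
```

Multiply that work by 1.8 gigapixels at 12 frames per second and the appetite for processors becomes obvious.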

All-Sensing

Your robot needs to make sense of the data streams received across a battery of sensory subsystems. Some may provide visual data, but there’s much more to your robot’s real-time information needs than the visual.

Make your robot IoT-aware. A robot built today should be ready to interface with the Internet of Things (IoT). If you have the DIY gene, the Raspberry Pi B+ is another way to gain horsepower and gather other instrument streams. The B+ has wired Ethernet, four USB ports, a microSD slot that gets you a Linux-capable device, half a gigabyte of RAM, a Blu-ray-capable GPU, HDMI output and an ARM chip with floating point running at 700 MHz. Depending on your budget, you can invest in one or a thousand of these puppies: they’re only $38 from Amazon. Or take advantage of the new Raspberry Pi Model A+. This little guy costs only $20 and weighs just 23 grams.
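As a sketch of what “IoT-aware” might look like in practice, here is a minimal loop a Pi could run to stream readings onto the robot’s local network using only the Python standard library. The broadcast address, port and simulated temperature value are our assumptions for illustration, not anything specific to the Pi.

```python
import json
import random
import socket
import time

# Minimal sketch: stream sensor readings off a Raspberry Pi over UDP so other
# nodes on the robot's network can consume them. The destination address, port,
# and fake temperature reading are illustrative assumptions.
DEST = ("192.168.1.255", 5005)   # assumed broadcast address for the local subnet

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

while True:
    reading = {
        "temperature_c": round(random.uniform(18.0, 24.0), 2),  # stand-in for a real sensor read
        "timestamp": time.time(),
    }
    sock.sendto(json.dumps(reading).encode("utf-8"), DEST)
    time.sleep(5)
```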

Other possible building blocks might include Xbox Kinect or one of Arduino’s products.


You can also upsize your sip of visual data streams. Get something the Department of Defense has chosen, like the Urban Robotics PeARL sensor system, described as “a high-end EO-RGB and Near IR frame sensor for aerial recon and photogrammetric apps.”

But visual information is not the end of the story. What about other kinds of sensor data? Acceleration, displacement, humidity, inclination, load, orientation, power consumption, pressure, strain, temperature, torque and vibration, to name a few. For example, you might want the LORD MicroStrain ENV-Link Mini. With it connected, your robot can make decisions about whether to change its daily schedule based on light, soil moisture, wind speed, water level or other environmental inputs, as in the sketch below.
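Here is a toy decision layer of that kind. The field names and thresholds are assumptions made for illustration, not specifications of the ENV-Link Mini.

```python
# Illustrative sketch only: a simple rule layer that decides whether the robot
# should change its daily schedule based on environmental readings.
# Field names and thresholds are assumptions, not device specifications.
def should_postpone_outdoor_tasks(env: dict) -> bool:
    """Return True if conditions suggest deferring outdoor work."""
    too_windy = env.get("wind_speed_mps", 0.0) > 12.0
    too_wet = env.get("soil_moisture_pct", 0.0) > 80.0
    too_dark = env.get("light_lux", 1000.0) < 50.0
    return too_windy or too_wet or too_dark

sample = {"wind_speed_mps": 14.2, "soil_moisture_pct": 35.0, "light_lux": 800.0}
print(should_postpone_outdoor_tasks(sample))   # True: wind exceeds the threshold
```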

Life-Long Learner

Your robot will need to learn new things. Some of that learning will likely take place on board (on the robot) and some elsewhere (in the cloud or on-premises, handled by specialized providers). Neural nets and other machine learning methods can recognize new patterns within and across sensory systems. Of course, the processing requirements may be more intense than you realize.
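A toy sketch using scikit-learn’s MLPClassifier illustrates the idea of learning patterns across fused sensor features. The synthetic data, feature names and thresholds below are our inventions; a real robot would train on labeled episodes logged over time.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy sketch of cross-sensor pattern learning: classify fused feature vectors
# (e.g. vibration level, temperature, load) as "normal" vs "anomalous".
# All data here is synthetic and for illustration only.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[0.2, 21.0, 5.0], scale=[0.05, 0.5, 0.5], size=(200, 3))
anomalous = rng.normal(loc=[0.8, 35.0, 9.0], scale=[0.1, 2.0, 1.0], size=(200, 3))

X = np.vstack([normal, anomalous])
y = np.array([0] * 200 + [1] * 200)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

print(clf.predict([[0.75, 33.0, 8.5]]))   # likely [1]: looks anomalous
```

Even this tiny example trains on only 400 samples and three features; scale it to continuous multi-sensor streams and the compute bill grows quickly.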

As Romea (2012) suggests, a framework is needed to recognize known patterns and to discover unknowns as the robot operates and (this is his key insight) “for as long as the robot operates.” It turns out this is far from a trivial task. Romea’s project, for example, attempted to automatically process the raw video stream for an entire robot workday. A firm called Spatial Cognition is working on a 3D visualization technique they call Metadata-based Spatial Partitioning, which they believe better leverages parallelism.

Rack-and-Carry or Cloud?

Earlier notions of robots conceived of self-contained, self-powered, fully autonomous machines. There is still a need for such machines. But for a great many applications, the combination of distributed and cloud computing means that a robot project built today might consist of a domain-specific mix of onboard and cloud resources. Need IBM Watson to help with natural-language processing? Leverage the Watson cloud API. Need to present graphical visualizations of the robot’s current scene understanding? Leverage SensorCloud’s user interface.
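A minimal sketch of that onboard/cloud split might look like the following: heavy language processing is offloaded to a cloud service while the robot keeps only thin decision logic. The URL, credential and response fields are hypothetical placeholders, not the actual Watson or SensorCloud APIs.

```python
import requests

# Hedged sketch of the onboard/cloud split: offload natural-language analysis
# to a cloud service; keep only decision logic on the robot.
# The endpoint, API key, and response schema are hypothetical placeholders.
API_URL = "https://example-cloud-nlp.invalid/v1/analyze"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                   # placeholder credential

def analyze_utterance(text: str) -> dict:
    """Send a command transcript to the (hypothetical) cloud NLP service."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Onboard code would only consume the parsed result, e.g.:
# result = analyze_utterance("bring the toolbox to bay three")
```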

Long To-Do List

Your robot project will have its work cut out for it. So far, no one has built a robot that can clean windows on a skyscraper as well as a person can, which is why two window washers dangled precariously for hours outside the 68th floor of 1 World Trade Center when their scaffolding failed.

One way to jump-start your skunkworks robot project is by analyzing machine logs. Syncsort and Splunk have a partnership to facilitate exactly that.
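Even without a full platform, a first pass at machine-log analysis can be very simple. The sketch below counts error events per subsystem from a plain-text log; the log format and file name are assumptions for illustration, and a Syncsort/Splunk pipeline would do the equivalent on real machine data at scale.

```python
import re
from collections import Counter

# Minimal machine-log analysis sketch: count ERROR events per subsystem.
# The log format ("date time LEVEL subsystem: message") and the file name
# are illustrative assumptions.
LINE_RE = re.compile(r"^\S+ \S+ (?P<level>\w+) (?P<subsystem>[\w.-]+): (?P<msg>.*)$")

errors_by_subsystem = Counter()
with open("robot.log") as fh:
    for line in fh:
        m = LINE_RE.match(line)
        if m and m.group("level") == "ERROR":
            errors_by_subsystem[m.group("subsystem")] += 1

for subsystem, count in errors_by_subsystem.most_common(5):
    print(f"{subsystem}: {count} errors")
```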

Reference
Alvaro Collet Romea. 2012. Lifelong Robotic Object Perception. Ph.D. dissertation, Carnegie Mellon University, Pittsburgh, PA, USA. Advisors: Martial Hebert and Siddhartha Srinivasa. AAI3534975.

