

From a robotics and information processing perspective, NASA’s missions have grown increasingly ambitious over recent years, most notably through its long-range probes and Mars rovers. But, as Shreyansh Daftry (an AI research scientist for NASA) explained in a recent talk for the 2021 AI Hardware Summit, there’s a long way yet to go – and much of it will (and must) be powered by AI.
“[This is] what I always wanted to do: become a robotics engineer that could build intelligent machines in space,” Daftry said. “But as I started to learn more, I understood that while NASA has been building these really capable machines, they’re not really what I, as a trained roboticist, would call intelligent.” Here, Daftry juxtaposed the 1969 lunar lander and the Curiosity rover, pointing out that both relied on human operators for landing and operation.
Changing the Human-Led Status Quo
But, he said, that paradigm would need to change – for two main reasons. “First,” Daftry said, “is the difficulty with deep space communications,” which are severely bandwidth-limited and suffer extreme latency that only grows with distance. With NASA planning missions to places like Europa, where it can take several hours to receive and respond to a signal, operating predominantly through human control would be prohibitively inefficient.
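To put the latency in concrete terms, here is a back-of-the-envelope calculation of one-way light time (a rough sketch – the distances below are approximate averages, and actual values swing widely with orbital geometry):

```python
# Rough one-way light time for deep space links.
# Distances are approximate averages; real values vary with orbital geometry.
SPEED_OF_LIGHT_KM_S = 299_792  # km/s

AVG_DISTANCE_KM = {
    "Mars": 225_000_000,                     # average Earth-Mars distance
    "Europa (Jupiter system)": 628_000_000,  # average Earth-Jupiter distance
}

for body, dist_km in AVG_DISTANCE_KM.items():
    one_way_min = dist_km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{body}: {one_way_min:.1f} min one way, "
          f"{2 * one_way_min:.1f} min round trip")
```

Light time alone to the Jupiter system runs over half an hour each way; layer on Deep Space Network scheduling and ground-in-the-loop decision-making, and a single command-response cycle can indeed stretch into hours.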
“The second major reason,” he continued, “is scalability.” He showed a map of all the satellites around the Earth and a diagram of a proposed Mars colony, asking the audience to imagine all of those disparate devices being controlled by their own teams of engineers and scientists. “Sounds crazy, right?” he said. “The only way we can scale up the space economy is if we can make our space assets self-sustainable, and artificial intelligence is going to be a key ingredient in making that happen.”
Daftry explained that there were a host of areas where AI could make a huge difference: autonomous science and planning, precision landing, dexterous manipulation, human-robot teams, and more. But for the purposes of his talk, he said, he would focus on just one: autonomous navigation.
Leading the Way (Safely) on Mars
“We landed another rover on Mars. Woohoo!” he said. “The Perseverance rover has the most sophisticated autonomous navigation system, ENav, that has ever driven on any extraterrestrial surface.” He showed a video of the system maneuvering through a rocky obstacle course. “However, if you look carefully at the video captions, you’ll notice that the video was sped up fifty times.” The lunar buggy, he said, had covered similar ground at a hundred times that speed fifty years earlier. “The key difference between what Perseverance does and what was happening here,” he said, “is [that] the intelligence of the rover was powered by the human brain.”
So Daftry and his colleagues have been developing autonomous systems that would help rovers drive more like humans do. “The current onboard autonomous navigation system running on Curiosity and also on Perseverance uses only geometric information,” he said, noting that the systems fail to capture textural distinctions – which could mean the difference between navigating safely and getting stuck in something like sand. “So we created SPOC, a deep learning-based terrain-aware classifier that can help Mars rovers navigate better.”
“Just like humans, SPOC can tell the terrain types based on textural features,” Daftry continued. The team developed two versions: one that works onboard a rover with six terrain classes, and one that operates on orbital images with 17 classes. SPOC, Daftry said, was rooted in the “ongoing revolution” in computer vision and deep learning. These deep learning models, however, were originally intended for use on Earth – not on Mars.
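The talk didn’t dive into SPOC’s architecture, but the core idea – per-pixel terrain classification with a configurable number of classes – can be sketched with a small fully convolutional network. Everything below (the network, its layer sizes, the class counts wired to each variant) is illustrative, not SPOC’s actual design:

```python
import torch
import torch.nn as nn

class TerrainSegNet(nn.Module):
    """Minimal fully convolutional net for per-pixel terrain classification.
    Illustrative only -- not SPOC's actual architecture."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # (B, num_classes, H, W) logits

onboard_model = TerrainSegNet(num_classes=6)   # rover-borne version
orbital_model = TerrainSegNet(num_classes=17)  # orbital-imagery version

logits = onboard_model(torch.randn(1, 3, 128, 128))
terrain_map = logits.argmax(dim=1)  # per-pixel terrain class IDs
```

The argmax over class logits yields a terrain map the navigation planner can consume – sand, bedrock, and so on – rather than raw geometry alone.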
Stumbling Blocks
“Adapting these models to work reliably in a Martian environment,” Daftry said, “is not trivial.” The team had applied transfer learning to bridge the gap, but there were a few key bottlenecks. First, he said, was data availability.

How SPOC can help rovers see the world, with textural distinctions. Image courtesy of Shreyansh Daftry/NASA.
“We did not have any labeled data to start with,” he explained. “And the fact that we have only about two dozen geologists in the world who have the knowledge to create these labels – as you can imagine, it is really hard to get time from these people to actually do manual labeling.” Even if they had, the cost would have been exorbitant.
So instead, the team started a citizen science project – AI4Mars – that tasked volunteers with labeling images from the rovers, and more than 200,000 images were labeled in just a few months.
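Crowd annotations from many volunteers have to be distilled into a single training label per region. One common recipe is simple majority voting – a minimal sketch, and not necessarily the aggregation AI4Mars actually used:

```python
from collections import Counter

def majority_vote(annotations: list[list[str]]) -> list[str]:
    """Aggregate noisy crowd labels: one list of votes per item,
    keep the most common label for each. Illustrative only."""
    return [Counter(votes).most_common(1)[0][0] for votes in annotations]

# Three volunteers label two image regions each.
votes_per_region = [
    ["sand", "sand", "soil"],  # two of three say sand
    ["bedrock", "bedrock", "bedrock"],
]
print(majority_vote(votes_per_region))  # ['sand', 'bedrock']
```

Real pipelines typically also weight annotators by reliability, but the voting idea is the same.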
Even crowd-sourced labels at that scale, though, were quite noisy – so the team also turned to synthetic data and simulation. “We worked with a startup called Bifrost AI, which excels at generating perfectly labeled data of complex unstructured environments using 3D graphics,” Daftry said. “The data generated is highly photorealistic and looks just like Mars. In addition, the labels are perfect and noise-free.”
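This is the classic transfer-learning recipe: pretrain on abundant, perfectly labeled synthetic scenes, then fine-tune on the smaller pool of real rover images. A minimal, self-contained sketch of that two-stage flow – the tiny model and the random stand-in data are placeholders, not NASA’s pipeline:

```python
import torch
import torch.nn as nn

def train(model, batches, epochs, lr):
    """Generic supervised loop: cross-entropy on per-pixel terrain labels."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in batches:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

def fake_batches(n):
    """Stand-in data: random images with 6-class per-pixel labels.
    Swap in rendered synthetic scenes / real AI4Mars rover images."""
    return [(torch.randn(4, 3, 64, 64), torch.randint(0, 6, (4, 64, 64)))
            for _ in range(n)]

# Tiny per-pixel classifier; a real system would use a deeper network.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 6, 3, padding=1))

train(model, fake_batches(50), epochs=3, lr=1e-3)  # pretrain: synthetic data
train(model, fake_batches(10), epochs=2, lr=1e-4)  # fine-tune: real images
```

The lower learning rate in the second stage is the usual guard against wiping out what the synthetic pretraining learned.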
The second bottleneck was hardware, with Daftry comparing the onboard computer on Perseverance to an iMac G3 from the late 1990s. NASA, he said, was looking at a series of solutions to enable deep learning models to run onboard rovers, from developing in-house hardware to adapting popular processors like Qualcomm’s Snapdragon to run on another planet.
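Daftry didn’t detail the software side of squeezing models onto such constrained processors, but one standard technique is post-training quantization, which stores weights as 8-bit integers to cut memory and compute. A sketch using PyTorch’s dynamic quantization (the toy model is hypothetical):

```python
import torch
import torch.nn as nn

# A small example network standing in for a terrain classifier head.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 6))

# Post-training dynamic quantization: weights stored as 8-bit integers,
# shrinking the model and speeding inference on supported CPUs.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
print(model(x).shape, quantized(x).shape)  # same interface, smaller model
```

Pruning and distillation are other common levers for the same memory-and-compute constraint.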
Third, Daftry said, was system integration and verification. “Once the rover is launched, we do not have the ability to fix things if something goes wrong, so things have to work in one shot.” This was a problem for deep learning. “Deep [convolutional neural network] models, while they work great, are inherently black-box in nature,” he said, “presenting a major bottleneck for system-level validation and proving guaranteed performance.” This, he added, was mostly addressed by “testing the heck out of our system” on Earth in comparable environments.
What’s Next?
“Overall, our long-term vision is to create something like Google Maps on Mars for our rovers,” Daftry said, “so that the human operator need only specify the destination and the rover uses the software to find its path.” Beyond that, NASA was looking at autonomous navigation for various other kinds of rovers, from deep sea rovers to cliff-scaling rovers.
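That “Google Maps on Mars” vision reduces, at its core, to graph search over a terrain-cost map – for instance, costs derived from a SPOC-style classifier. A minimal sketch using Dijkstra’s algorithm on a grid (the cost values are made up):

```python
import heapq

def plan_path(cost_grid, start, goal):
    """Dijkstra over a 2D grid; cost_grid[r][c] is traversal cost
    (e.g., derived from terrain classes: sand expensive, bedrock cheap)."""
    rows, cols = len(cost_grid), len(cost_grid[0])
    frontier = [(0, start, [start])]  # (total cost, cell, path so far)
    seen = set()
    while frontier:
        cost, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost, path
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                heapq.heappush(
                    frontier,
                    (cost + cost_grid[nr][nc], (nr, nc), path + [(nr, nc)]))
    return None  # no route found

# Made-up cost map: 1 = easy bedrock, 9 = hazardous sand.
terrain = [[1, 1, 9, 1],
           [1, 9, 9, 1],
           [1, 1, 1, 1]]
print(plan_path(terrain, start=(0, 0), goal=(0, 3)))
```

Swap the uniform expansion for an A* heuristic and feed in classifier-derived costs, and you have the skeleton of destination-only navigation.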