The ‘Golden Age’ of AI and Autonomy
Written 13 January 2025
Panel Highlights Critical Role of AI and Autonomy on Earth and in Space
By Anne Wainscott-Sargent, AIAA Communications Team
ORLANDO, Fla. – In the future, artificial intelligence (AI) and autonomous systems will transform how people and assets are tracked, whether on Earth or in space, noted speakers at an AIAA SciTech Forum plenary session on AI and Autonomy last Thursday, 9 January.
Advances in real-time monitoring and connectivity will help first responders act fast, said one panelist, recalling a 2012 Sausalito, California, road fatality caused when a man crashed his car following a heart attack. He was traveling alone at night, with no one aware of his location.
“In a world where we have a fully connected comms system, that plays out very differently,” said Eric Smith, senior principal, Remote Sensing and Data Analytics at Lockheed Martin Space.
Redefining Accident Response
Not only would AI wearable tech proactively monitor the man’s medical condition, it also would alert EMS and even coordinate traffic control systems to ensure the speediest response to his location.
The plenary session highlighted advancements in AI and their applications in simulation, safety, and decision making, as well as how autonomous systems are reshaping the future of space exploration.
“This is a golden age for robotics and autonomy,” noted Marco Pavone, lead autonomous vehicle researcher at Nvidia and an associate professor at Stanford University in the Department of Aeronautics and Astronautics.
His focus is fourfold: 1) develop vision language models for vehicle autonomy architectures, 2) find other ways of architecting autonomous tasks, 3) explore simulation technologies to enable end-to-end simulation of autonomous tasks in a realistic and controllable way, and 4) research AI safety, building safe and trustworthy AI systems, particularly in space systems and self-driving cars.
Pavone also co-founded a new center at Stanford – the Center for AEroSpace Autonomy Research (CAESAR), which was formed to advance the state of the art by infusing autonomous reasoning capabilities in aerospace systems.
“At the center we are looking at AI techniques for construction tasks for other space systems, and we’re even developing space foundation models that take into account specific inputs and outputs,” he said.
Lockheed Martin is using AI in all four domains of its business: Space, Missiles and Fire Control, Rotary and Mission Systems, and Aeronautics. The company envisions AI for autonomy in unstructured environments like the surface of the moon or Mars, with multiagent cooperative autonomy for manufacturing and assembly.
Smart Robots Likely to Precede Humans to Mars
“I foresee the first habitable, critical infrastructure on the surface of Mars being constructed by a team of robots using material and tools and high-level instructions that say, ‘Do the following things’ [in preparation] for humans to arrive,” said Smith.
On the ground, advances in autonomy and AI will play an important role in land-use monitoring, disaster response management and coordination, and asset tracking, and will keep working even when objects pass under bridges or cloud cover. Lockheed Martin Missiles and Fire Control has a department called Advanced Autonomy concerned with autonomous ground vehicles.
Better Fire Prediction and Detection
According to Smith, the group is exploring advanced technologies to help firefighters better predict, detect, and fight wildfires. The technology could predict and locate a fire sparked by a lightning strike hours before it even starts. Using the power of AI, Lockheed’s technology could also analyze fire behavior in near real time to enable fire growth predictions and to deliver persistent communications across multiagency air and land suppression units, so they can respond more quickly to a large, complex fire. Unfortunately, the technology is only in test mode; it’s not currently helping fight the fires ravaging Southern California, said Smith.
Moderator Julie Shah, Department Head and H.N. Slater Professor in Aeronautics and Astronautics at Massachusetts Institute of Technology (MIT), discussed how much the world has changed in the context of AI over the last two decades.
Continually Evolving AI Systems
“When I did my Ph.D., it was on automated planning and scheduling with no machine learning,” recalled Shah. “When I started my career on faculty, I remember a colleague at NASA told me … nothing that learns online will ever fly in space. In the blink of an eye, a few years later, all I did in my lab was machine learning.”
Pavone agreed with Shah that future aerospace missions, especially for space exploration, will need AI systems that can continue to evolve and learn after they deploy.
“Adaptation is needed and so that’s something we are working on,” said Pavone, noting that his lab is collaborating with The Aerospace Corporation on AI systems that can handle anomalies: “How do you use those anomalies to train your system on the ground so that you can still do validation and then improve it?”
Following the panel, Pavone emphasized that foundation models, large language models, and vision language models all provide “several opportunities to rethink how we build autonomous systems.”
He pointed to several breakthroughs in simulation technologies, which will make simulation a powerful tool for developing autonomous systems.
Aerospace: Lessons from Automotive’s AI Experience
Pavone added that while the application domain he focuses on at Nvidia is primarily automotive (self-driving cars), aerospace researchers can learn from the automotive industry.
“The automotive [industry] has been building AI systems for a while now, and they have built quite a bit of competence in terms of which AI systems should be fielded and also how to prove that they are safe and reliable. So, both the methodologies and the safety standards that have been developed by the automotive community could be useful for the aerospace community,” he said.
Forum Attendees Weigh In On AI
Following the plenary, Jorge Hernandez, president of Texas-based Bastion Technologies, said, “Just the opportunity to hear how different organizations are working with AI was fantastic. What Stanford, Lockheed, and MIT are doing is exceptional. We’re all interested in seeing how that will impact us in the future…and we’re all interested in getting involved.”
His firm focuses on safety and mission assurance and mechanical engineering, said Hernandez. “We get involved on the risk and analysis side, so how AI plays into that will be an important piece of what we do.”
Rudy Al Ahmar, a PhD student completing his aerospace engineering studies at Auburn University’s Advanced Propulsion Research Laboratory this semester, agreed with the panelists that there was a lot of skepticism about AI and machine learning five years ago, but those concerns were addressed within a few years; the same thing has happened with generative AI.
“For a lot of scientists and researchers, it’s not a matter of if they’re going to use AI and machine learning, it’s a matter of when and how they’re going to implement it – whether on a large scale or small scale,” he said.
The doctoral candidate said he hopes to research the integration of AI and machine learning with computational fluid dynamics (CFD) as an assistant professor.
“It’s computationally demanding to work on these aerospace applications with CFD. AI and machine learning can reduce the computational cost and make things rapid so you can optimize and study things much, much quicker.”