Space News reports, “On June 25, the National Oceanic and Atmospheric Administration (NOAA) launched the fourth and final satellite of the Geostationary Operational Environmental Satellites (GOES)-R program.” The GOES-R team has successfully leveraged “the power of AI and an application called the Advanced Intelligent Monitoring System (AIMS),” to improve both “operational efficiency and mission resilience.”
Full Story (Space News)
Panelists at ASCEND Discuss AI Challenges and Promise
Space News reports, “Space organizations are continuing to identify promising applications of artificial intelligence, according to speakers at the AIAA ASCEND conference.” At NASA, for example, AI helps aggregate complex datasets from various Earth-observation sensors and illustrate the data through modeling “in ways that are ‘intuitively clear,’” said David Salvagnini, NASA’s chief artificial intelligence officer and chief data officer.
Full Story (Space News)
Roper Says Sixth-Generation Aircraft Will Include AI Co-Pilot
ExecutiveGov reports that Assistant Secretary of the Air Force for Acquisition, Technology, and Logistics Will Roper “said the service branch’s future sixth-generation aircraft will feature an artificial intelligence co-pilot, Breaking Defense reported Friday.” The Next-Generation Air Dominance program “will apply AI as a support platform for human pilots aboard the future aircraft.” Roper “said the service will have to determine how to certify the AI platforms for this envisioned human-AI teaming approach.”
Full Story (ExecutiveGov)
US Air Force Test Marks First-Known Use of AI On Military Aircraft
The Washington Post reports that the US Air Force allowed an AI algorithm “to control sensor and navigation systems on a U-2 Dragon Lady spy plane in a training flight Tuesday, officials said.” The event marks “what is believed to be the first known use of AI onboard a U.S. military aircraft.” Defense officials “touted the test as a watershed moment” in DoD’s attempt “to incorporate AI into military aircraft, a subject that is of intense debate in aviation and arms control communities.” Assistant Air Force Secretary Will Roper said, “This is the first time this has ever happened.”
Full Story (Washington Post)
Rolls-Royce Germany to Undergo Digital Transformation with AI
Aviation Today reports that Rolls-Royce Germany will leverage “artificial intelligence (AI) in a new partnership with Altair for its engineering, testing, and design of aerospace engines to accelerate certification and design iterations, reduce extensive physical testing, and improve product quality.” Sam Mahalingam, Chief Technical Officer at Altair, told Aviation Today, “Successful proof of concepts (POCs) have shown that machine learning (ML) can predict where you need to ‘simulate,’ drastically reducing the amount of re-certification (simulations) needed.” AI can also “predict where and how testing should be done allowing for more effective testing.”
Full Story (Aviation Today)
The ‘Golden Age’ of AI and Autonomy
Panel Highlights Critical Role of AI and Autonomy on Earth and in Space
By Anne Wainscott-Sargent, AIAA Communications Team
ORLANDO, Fla. – In the future, artificial intelligence (AI) and autonomous systems will transform how people and assets are tracked, whether on Earth or in space, noted speakers on an AIAA SciTech Forum plenary on AI and Autonomy last Thursday, 9 January.
Advances in real-time monitoring and connectivity will help first responders act fast, said one panelist, recalling a 2012 Sausalito, California, road fatality caused when a man crashed his car following a heart attack. He was traveling alone at night, with no one aware of his location.
“In a world where we have a fully connected comms system, that plays out very differently,” said Eric Smith, senior principal, Remote Sensing and Data Analytics at Lockheed Martin Space.
Redefining Accident Response
Not only would AI wearable tech proactively monitor the man’s medical condition, it also would alert EMS and even coordinate traffic control systems to ensure the speediest response to his location.
The plenary session highlighted advancements in AI and their applications in simulation, safety, and decision making, as well as how autonomous systems are reshaping the future of space exploration.
“This is a golden age for robotics and autonomy,” noted Marco Pavone, lead autonomous vehicle researcher at Nvidia and an associate professor at Stanford University in the Department of Aeronautics and Astronautics.
His focus is fourfold: 1) develop visual language models for vehicle autonomy architectures, 2) find other ways of architecting autonomous tasks, 3) explore simulation technologies to enable end-to-end simulation of autonomous tasks in a realistic and controllable way, and 4) research AI safety – building safe and trustworthy AI systems, particularly in space systems and self-driving cars.
Pavone also co-founded a new center at Stanford – the Center for AEroSpace Autonomy Research (CAESAR), which was formed to advance the state of the art by infusing autonomous reasoning capabilities in aerospace systems.
“At the center we are looking at AI techniques for construction tasks for other space systems, and we’re even developing space foundation models that take into account specific inputs and outputs,” he said.
Lockheed Martin is using AI in all four of its business areas – Space, Missiles and Fire Control, Rotary and Mission Systems, and Aeronautics. The company envisions AI for autonomy in unstructured environments like the surface of the moon or Mars, with multiagent cooperative autonomy for manufacturing and assembly.
Smart Robots Likely to Precede Humans to Mars
“I foresee the first habitable, critical infrastructure on the surface of Mars being constructed by a team of robots using material and tools and high-level instructions that say, ‘Do the following things’ [in preparation] for humans to arrive,” said Smith.
On the ground, autonomy and AI advances will play an important role in land-use monitoring and in managing and coordinating disaster response and asset tracking, working even when objects pass under bridges or cloud cover. Lockheed Martin Missiles and Fire Control has a department called Advanced Autonomy that works on autonomous ground vehicles.
Better Fire Prediction and Detection
According to Smith, the group is exploring advanced technologies to help firefighters better predict, detect, and fight wildfires. The technology could predict and locate a fire hours before it even starts from a lightning strike. Using the power of AI, Lockheed’s technology could also analyze fire behavior in near real time to enable fire growth predictions and to deliver persistent communications across multiagency air and land suppression units, so they can respond more quickly to a large, complex fire. The technology is still in testing, however; it is not currently helping fight the fires ravaging southern California, said Smith.
Moderator Julie Shah, Department Head and H.N. Slater Professor in Aeronautics and Astronautics at Massachusetts Institute of Technology (MIT), discussed how much the world has changed in the context of AI over the last two decades.
Continually Evolving AI Systems
“When I did my Ph.D., it was on automated planning and scheduling with no machine learning,” recalled Shah. “When I started my career on faculty, I remember a colleague at NASA told me … nothing that learns online will ever fly in space. In the blink of an eye, a few years later, all I did in my lab was machine learning.”
Pavone agreed with Shah that future aerospace missions, especially for space exploration, will need AI systems that can continue to evolve and learn after they deploy.
“Adaptation is needed, and so that’s something we are working on,” said Pavone, noting that his lab is collaborating with The Aerospace Corporation on AI systems that can surface anomalies – “How do you use those anomalies to train your system on the ground so that you can still do validation and then improve it?”
Following the panel, Pavone emphasized that foundation models, large language models, and vision language models all provide “several opportunities to rethink how we build autonomous systems.”
He pointed to several breakthroughs in simulation technologies, which will make simulation a powerful tool for developing autonomous systems.
Aerospace: Lessons from Automotive’s AI Experience
Pavone added that while the application domain he focuses on at Nvidia is primarily automotive (self-driving cars), aerospace researchers can learn from the automotive industry.
“The automotive [industry] has been building AI systems for a while now, and they have built quite a bit of competence in terms of which AI system should be fielded and also how to prove that they are safe and reliable. So, both the methodologies and the safety standards that have been developed by the automotive community could be useful for the aerospace community,” he said.
Forum Attendees Weigh In On AI
Following the plenary, Jorge Hernandez, president of Texas-based Bastion Technologies, said, “Just the opportunity to hear how different organizations are working with AI was fantastic. What Stanford, Lockheed, and MIT are doing is exceptional. We’re all interested in seeing how that will impact us in the future…and we’re all interested in getting involved.”
His firm focuses on safety and mission assurance and mechanical engineering, said Hernandez. “We get involved on the risk and analysis side, so how AI plays into that will be an important piece of what we do.”
Rudy Al Ahmar, a PhD student completing his aerospace engineering studies at Auburn University’s Advanced Propulsion Research Laboratory this semester, agreed with the panelists: there was a lot of skepticism about AI and machine learning five years ago, but those concerns were addressed within a few years, and the same thing is now happening with generative AI.
“For a lot of scientists and researchers, it’s not a matter of if they’re going to use AI and machine learning, it’s a matter of when and how they’re going to implement it – whether on a large scale or small scale,” he said.
The doctoral candidate said he hopes to research the integration of AI and machine learning with computational fluid dynamics (CFD) as an assistant professor.
“It’s computationally demanding to work on these aerospace applications with CFD. AI and machine learning can reduce the computational cost and make things rapid so you can optimize and study things much, much quicker.”
On Demand Recording Available
AFRL Digital Transformation Champion Urges People to Embrace, Not Fear AI
By Anne Wainscott-Sargent, AIAA Communications Team
ORLANDO, Fla. – If Alexis Bonnell had her way, every person would embrace Artificial Intelligence (AI) fearlessly as a tool that gives them back “minutes for their mission” and enables them to “tackle the toil” of mundane work tasks.
The charismatic former Googler, now serving as chief information officer and director of Digital Capabilities Directorate for the Air Force Research Lab (AFRL), believes technology fails when it fails to serve people.
While AI and generative AI promise to bring new efficiencies to all industries and, in many instances, to reinvent how work is done, they are also a transformative force that many people fear will take away their livelihoods. According to Bonnell, the way the work world packages and frames AI makes it difficult for people to accept the tool.
The visionary behind AFRL’s digital transformation doesn’t talk or act like a typical government executive. Speaking before a standing-room-only crowd at the 2025 AIAA SciTech Forum, she stood out among the room of business-dress-attired engineers and managers, wearing a red top, dark jeans, and star-studded knee-high boots. She wore multiple black rubber wristbands printed with her favorite AI catchphrases, which she gave away as keepsakes to inquisitive attendees following her talk.
Bonnell’s presentation included advice on bringing about the cultural change needed in how workers and managers view AI, drawing on insights from her team’s rollout of NIPRGPT, AFRL’s AI research platform for exploring the power of generative AI technology. Launched in June 2024, NIPRGPT grew to a base of about 80,000 volunteer users in four months, InsideDefense reported. Interest in access to AI tools across the Department of Defense shows no signs of slowing.
In a June 2024 news release announcing the tool, Bonnell noted that “changing how we interact with unstructured knowledge is not instant perfection; we each must learn to use the tools, query, and get the best results. NIPRGPT will allow Airmen and Guardians to explore and build skills and familiarity as more powerful tools become available.”
To the AIAA SciTech Forum’s technical audience, she cautioned that some of her insights may be wrong in six months and “that’s okay…. We’re in an era where we may not have the time for the right answer, so we have to become comfortable with ‘right for now,’ be willing to learn and pivot,” she said. She added that when she thinks about generative AI, she doesn’t think about it as a source of answers, but “as a source of options.”
In answering why the world is clamoring for AI tools now, Bonnell said it’s important to realize that “we now live in a fundamentally different age” – one where people in leadership roles must make decisions and adapt quickly and pivot as conditions change. Consider that 90% of the world’s data was created in the last three years, with 94% of it what Bonnell called unstructured “deluges.”
A sign of the changing times is also evident in battlefield decision-making trends. In the war between Russia and Ukraine, Bonnell said the time frame for Russia countering Ukraine’s software has shrunk, in some cases, to only two weeks. That kind of speed requires new information tools and the ability to make decisions fast. As a result, “we have to think about our technology differently than we did before.”
Bonnell dislikes the mixed messages people have historically received about AI: “We tell people we trust you with a weapon, with a $100M budget, with a security clearance and lots of sensitive information, but we don’t trust you with ChatGPT. What are we actually telling people?” she questioned. “It’s important that we make people feel like they are enough, that they’ve got this, that they are capable, and that we trust them to use tools in the right way. Our future as humans is constant adaptation, the only group that benefits when we are afraid of our own technology is the adversary.”
The technologist noted that the world is not communicating the value of AI in the right way; instead, the first thing people hear is that it’s really complicated, technical, and hard. “That kind of tells someone, ‘You’re not smart enough.’”
She urged a change in the AI narrative and a recognition that public servants and military personnel are showing up to their jobs to be intentional and responsible.
The AFRL leader emphasized that the main job of AI in its first phase of human adoption is to simplify mundane work and shave time off it, so people can gain back “minutes for their mission.” That’s exactly what the coders and developers on the AI Research Platform have realized: they report productivity gains of 25–85% using AI tools, Bonnell said.
Bonnell noted that AI and genAI are fundamentally different from other technologies because of the level of intimacy of knowledge the tools deliver.
“Users get to collect information and the data that they think is relevant and then they use the tool to have a curiosity-based relationship with that data.”
Bonnell has observed at AFRL that her team is leveraging genAI to create a “knowledge universe” around themselves without needing to ask her for information, a discovery that has prompted her to rethink her role as a leader. She challenged other people in CIO roles to be similarly introspective: “For those of us in roles like CIOs, it’s a question of how are we going to show up? Are we going to be a gatekeeper or are we going to be a facilitator? There’s a lot of interesting things this is putting into motion.”
In her case, Bonnell is looking at how she can get out of the way of this curiosity journey. “How do I foster the ability for someone to need me less and be able to have a dynamic relationship with knowledge?”
After the presentation, several attendees expressed their appreciation for Bonnell’s take on the state of AI attitudes, workplace culture, and the need to lead differently.
“I like how she talked about coming from the direction ‘see what we can do here’ instead of from a caution perspective of ‘I don’t know if we can do that’ to an attitude of ‘let’s figure out how we can make this work,’” said Christine Edwards, a fellow of AI and Autonomy at Lockheed Martin, whose work includes providing cognitive assistance for firefighters and looking at how to use AI to improve spacecraft operations.
Edwards also enjoyed Bonnell’s insights about trust and AI. “She said it’s less about whether I trust this new technology and more about ‘do I have the confidence that it’s going to have the performance I need for this particular part of my mission?’ I really like that perspective shift.”
John Reed, chief rocket scientist at United Launch Alliance, said he appreciated that Bonnell provided tools for mitigating some of the fear the workforce has about AI. “That’s helpful to think through the stages and the fact that there are going to be people who are concerned, ‘Is this going to eat my job?’ It’s really an augmentation technology just like machine learning. It’s best employed when it’s done to augment the algorithms we’re doing today to make it more effective,” he explained.
The talk also resonated deeply with Marshall Lee, senior director of business development at Studio SE Ltd., a consulting firm focused on model-based systems engineering (MBSE) training and coaching.
“Us engineers are all about the tool, the technology, the formula, the detail. She’s really addressing the changes in brain chemistry and emotion [necessary] for the adoption of the technology,” said Lee. “She’s actually saying you have to change the psychology of the person first before they are going to adopt the new technology. It’s all about that emotion and behavior change and understanding people, starting with where they are.”
On Demand Recording Available
Early Tests Indicate ChatGPT Could Pilot a Spacecraft Unexpectedly Well
SPACE reports, “In a recent contest, teams of researchers competed to see who could train an AI model to best pilot a spaceship. The results suggest that an era of autonomous space exploration may be closer than we think.”
Full Story (SPACE)