How should an AI explore the Moon?

Photo: University of Alberta

Rapid advances in artificial intelligence (AI) have spurred some leading voices in the industry to call for a pause in research, warn of the risk of human extinction driven by AI, and even ask for government regulation. At the heart of their concern is the idea that AI could become so powerful that we lose control of it.

But have we missed a more fundamental problem?

Ultimately, AI systems should help humans make better and more accurate decisions. However, even the most impressive and flexible of today’s AI tools, like the large language models behind the likes of ChatGPT, can have the opposite effect.

Why? They have two crucial weaknesses. First, they do not help decision makers understand causation or uncertainty. Second, they create incentives to collect huge amounts of data, and can encourage a lax attitude towards privacy, legal and ethical questions and risks.

Cause, effect and confidence

ChatGPT and other foundation models use an approach called deep learning to examine huge datasets and identify associations between factors contained in that data, such as patterns of language or links between images and descriptions. As a result, they are very good at interpolating, that is, predicting or filling in the gaps between known values.

Interpolation is not the same as creation. It does not generate knowledge, nor the insights needed by decision makers operating in complex environments.
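To make the distinction concrete, here is a minimal Python sketch of interpolation in its simplest form (the data points are invented for illustration): it can fill a gap between known observations, but it cannot say why the values are what they are, or how confident we should be in the estimate.

```python
import numpy as np

# Hypothetical, made-up measurements: hours of study vs. exam scores.
hours = np.array([1.0, 2.0, 4.0, 8.0])
scores = np.array([52.0, 60.0, 71.0, 88.0])

# Interpolation fills the gap between known values...
predicted = np.interp(3.0, hours, scores)   # estimate a score for 3 hours
print(predicted)                            # ~65.5

# ...but it says nothing about *why* scores rise with hours
# (tutoring? sleep? prior ability?), nor how certain the estimate is.
```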

However, these approaches require huge amounts of data. As a result, they encourage organizations to assemble massive data archives or sift through existing datasets collected for other purposes. The management of big data involves significant risks in terms of security, privacy, legality and ethics.

In low-stakes situations, predictions based on what the data suggests will happen can be incredibly useful. But when the stakes are higher, there are two more questions we need to answer.

The first concerns how the world works: what is driving this outcome? The second concerns our knowledge of the world: how sure are we of this?

From big data to useful information

Perhaps surprisingly, AI systems designed to infer causal relationships don’t need big data. Instead, they need useful information. The usefulness of information depends on the issue at hand, the decisions we face, and the value we place on the consequences of those decisions.

To paraphrase the US statistician and writer Nate Silver, the amount of truth is approximately constant regardless of the volume of data we collect.

So what is the solution? The process begins with the development of artificial intelligence techniques that tell us what we really don’t know, rather than producing variations on existing knowledge.

Why? Because this helps us to identify and capture the minimum amount of valuable information, in a sequence that will allow us to untangle cause and effect.

A robot on the moon

Such knowledge-building AI systems already exist.

As a simple example, consider a robot sent to the Moon to answer the question: What does the surface of the Moon look like?

Robot designers can give it a prior belief about what it will find, along with an indication of how much confidence it should have in that belief. The degree of confidence is as important as the belief, because it is a measure of what the robot does not know.

The robot lands and has to make a decision: which direction to take?

Since the robot’s goal is to learn about the surface of the Moon as quickly as possible, it should go in the direction that maximizes its learning. This can be measured by how much the new knowledge will reduce the robot’s uncertainty about the landscape, or how much it will increase the robot’s confidence in its knowledge.

The robot travels to its new location, records observations using its sensors, and updates its beliefs and associated confidence. In doing so, it learns about the surface of the Moon in the most efficient way possible.
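Here is a minimal sketch of that loop, assuming a toy one-step setting in which the robot keeps a single "is this cell rocky?" belief for each direction and the sensor reliabilities are made-up numbers for illustration (a real rover would maintain a full probabilistic map):

```python
import numpy as np

# Hypothetical belief that each neighbouring cell is "rocky";
# 0.5 means maximal uncertainty.
belief = {"north": 0.9, "east": 0.5, "south": 0.2, "west": 0.65}

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli belief -- a measure of what the robot does NOT know."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Go where uncertainty (and hence potential learning) is greatest.
# With an equally noisy sensor in every direction, this is also the
# direction of maximum expected information gain.
direction = max(belief, key=lambda d: entropy(belief[d]))
print("explore:", direction)            # "east" -- the most uncertain cell

# After travelling there, a noisy sensor reports "rocky".
# Bayes' rule updates the belief and the associated confidence together.
p_obs_given_rocky = 0.8                 # sensor hit rate (assumed)
p_obs_given_flat = 0.3                  # sensor false-alarm rate (assumed)
prior = belief[direction]
posterior = (p_obs_given_rocky * prior) / (
    p_obs_given_rocky * prior + p_obs_given_flat * (1 - prior))
belief[direction] = posterior
print(f"belief that {direction} is rocky: {prior:.2f} -> {posterior:.2f}")
print(f"uncertainty: {entropy(prior):.2f} -> {entropy(posterior):.2f} bits")
```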

Robotic systems like this, known as active SLAM (active simultaneous localization and mapping), were first proposed more than 20 years ago and remain an active field of research. This approach of continually gathering knowledge and updating understanding is based on a statistical technique called Bayesian optimization.
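The sketch below illustrates the same idea in a continuous setting, assuming scikit-learn and SciPy are available; the "hidden value" function is invented purely to simulate measurements, and the code is a generic textbook-style illustration of Bayesian optimization, not any particular rover's active-SLAM software. A Gaussian-process surrogate supplies both a prediction and a confidence estimate, and the expected-improvement rule picks the next place to sample.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical objective the robot cannot see directly, e.g. the
# "scientific value" of sampling at position x along a traverse.
# Used here only to simulate noiseless measurements.
def hidden_value(x):
    return np.exp(-(x - 3.2) ** 2) + 0.3 * np.sin(2 * x)

X = np.array([[0.5], [2.0], [4.5]])          # positions sampled so far
y = hidden_value(X).ravel()

candidates = np.linspace(0.0, 5.0, 201).reshape(-1, 1)

for step in range(5):
    # Fit a Gaussian-process surrogate: a belief plus a confidence estimate.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)

    # Expected improvement: balances exploiting good predictions against
    # exploring where the model is most unsure.
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next.reshape(1, 1)])
    y = np.append(y, hidden_value(x_next[0]))
    print(f"step {step}: sample at {x_next[0]:.2f}, value {y[-1]:.3f}")
```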

Mapping unknown landscapes

A decision maker in government or industry faces greater complexity than the robot on the Moon, but the thinking is the same. Their work involves exploring and mapping unfamiliar social or economic landscapes.

Suppose we want to develop policies to encourage all children to succeed in school and finish high school. We need a concept map of what actions, at what times and under what conditions, will help achieve these goals.

Using the same principles as the robot, we formulate an initial question: which intervention(s) will help children the most?

Next, we build a draft concept map using existing knowledge. We also need a measure of our confidence in that knowledge.

We then develop a model that incorporates different sources of information. These will not come from robotic sensors, but from communities, lived experiences and any useful information from the recorded data.

Next, based on analysis informed by community and stakeholder preferences, we make a decision: what actions should be implemented, and under what conditions?

Finally, we discuss, learn, update beliefs, and repeat the process.

Learn as we go

This is a “learn as we go” approach. As new information arrives, new actions are chosen to maximize some pre-specified criteria.

Where AI can be useful is in identifying what information is most valuable, using algorithms that quantify what we don’t know. Automated systems can also collect and store that information at a rate, and in places, where it might be difficult for humans to do so.

AI systems like this apply what is called a Bayesian decision framework. Their models are explainable and transparent, built on explicit assumptions. They are mathematically rigorous and can offer guarantees.

They are designed to estimate causal pathways, to help make the best intervention at the best time. And they incorporate human values by being co-designed and co-implemented by the communities that are affected by them.
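As a rough sketch of what a Bayesian decision framework can look like in code: a posterior belief about what is driving the outcome is combined with the value placed on each intervention’s consequences, and the action with the highest expected utility is chosen. Every name and number below is an invented placeholder, not an estimate from any real program or study.

```python
# Minimal sketch of a Bayesian decision framework (hypothetical values).

# Posterior belief over what is driving the outcome (sums to 1).
p_cause = {"attendance": 0.5, "literacy": 0.3, "wellbeing": 0.2}

# Assumed value (utility) of each intervention if that cause is the real driver.
utility = {
    "breakfast_program": {"attendance": 8, "literacy": 2, "wellbeing": 5},
    "tutoring":          {"attendance": 1, "literacy": 9, "wellbeing": 2},
    "counselling":       {"attendance": 3, "literacy": 1, "wellbeing": 9},
}

# Expected utility of each action under the current belief.
expected = {
    action: sum(p_cause[cause] * value for cause, value in payoffs.items())
    for action, payoffs in utility.items()
}

best_action = max(expected, key=expected.get)
for action, eu in sorted(expected.items(), key=lambda kv: -kv[1]):
    print(f"{action:18s} expected utility = {eu:.2f}")
print("choose:", best_action)

# As new evidence arrives, p_cause is updated with Bayes' rule and the
# decision is re-evaluated -- the "discuss, learn, update, repeat" loop.
```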

We need to reform our laws and create new rules to guide the use of potentially dangerous AI systems. But it’s just as important to choose the right tool for the job in the first place.



Sally Cripps, Director of Technology, Human Technology Institute, and Professor of Mathematics and Statistics, University of Technology Sydney; Alex Fischer, Honorary Fellow, Australian National University; Edward Santow, Professor and Co-Director, Human Technology Institute, University of Technology Sydney; Hadi Mohasel Afshar, Lead Researcher, University of Technology Sydney; and Nicholas Davis, Industry Professor of Emerging Technology and Co-Director, Human Technology Institute, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.
