Artificially Intelligent Physicist Has Monumental Implications

The Department of Physics at the Massachusetts Institute of Technology (MIT) released a report announcing plans for an artificially intelligent physicist. The report, written by Tailin Wu and Max Tegmark and published on November 6, details plans for an AI Physicist capable of learning and storing a multitude of theories to form a knowledge base, allowing it to draw new conclusions from a wide variety of experiments and environments.

The report is a dense piece of computer science and physics writing, but in broad terms, its authors explain that past efforts to create an AI physicist have been hampered by algorithms’ inability to interpret data drawn from a wide array of environments. Previous incarnations of such algorithms have also been poor at communicating intelligible conclusions.

“To address these challenges, we will borrow from physics the core idea of a theory,” the authors write. The report details the architecture built around that idea: instead of fitting a single, large model to all the data (the standard machine-learning paradigm), Wu and Tegmark’s AI Physicist gradually accumulates many smaller theories, learning and organizing them as it goes.
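To make the contrast concrete, here is a minimal Python sketch (our own illustration, not code from the report) of what “many small theories” might look like in place of one big model: each theory pairs a simple predictive model with the domain where it claims to apply.

```python
# Toy illustration: a list of small theories, each valid in its own regime,
# instead of a single model fit to all the data. All names here are ours.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Theory:
    predict: Callable[[float], float]  # a small model for one regime
    applies: Callable[[float], bool]   # the region where it claims validity

def predict_with_theories(theories: List[Theory], x: float) -> float:
    for t in theories:
        if t.applies(x):
            return t.predict(x)
    raise ValueError("no stored theory claims this point yet")

# Two toy "laws", each owning half of the input space.
theories = [
    Theory(lambda x: -9.8 * x, lambda x: x < 0),
    Theory(lambda x: 2.0 * x, lambda x: x >= 0),
]
print(predict_with_theories(theories, -1.0), predict_with_theories(theories, 3.0))  # 9.8 6.0
```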

These theories are allowed to “divide and conquer”: each specializes in, and competes for, its own region of the data, while all of them are subjected to an Occam’s Razor strategy, which favors simple theories that explain a great deal without being overfitted to any particular dataset.
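The report implements these two pressures with a differentiable loss and a description-length objective; the rough Python sketch below (our simplification, with an invented scoring function) shows both ideas in their simplest form.

```python
import numpy as np

def assign_points(theories, xs, ys):
    """Divide and conquer: credit each data point to the theory that
    predicts it best, so theories specialize in regions of the data."""
    errors = np.stack([(t(xs) - ys) ** 2 for t in theories])
    return errors.argmin(axis=0)

def occam_score(mse, n_params, eps=1e-12):
    """Crude stand-in for an Occam's Razor objective: reward accuracy,
    penalize complexity (here, simply the parameter count)."""
    return float(np.log(mse + eps) + n_params)

xs = np.linspace(-1, 1, 100)
ys = np.where(xs < 0, -9.8 * xs, 2.0 * xs)           # two regimes, one dataset
theories = [lambda x: -9.8 * x, lambda x: 2.0 * x]
print(np.bincount(assign_points(theories, xs, ys)))  # each theory wins ~half
```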

The structure of the AI is built to favor continued learning based on past conclusions. Similar theories found in different environments are clustered together and then unified under a “master theory.” All learned theories are stored in a “theory hub,” which can be drawn on for future reference.
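A hypothetical sketch of that hub in Python (the class, its names, and the similarity test are our inventions): theories are stored as parameter vectors, and near-duplicates discovered in different environments are clustered and averaged into a master theory.

```python
import numpy as np

class TheoryHub:
    """Toy theory hub: stores learned theories as parameter vectors and
    unifies near-identical ones into master theories."""
    def __init__(self):
        self.theories = []

    def add(self, params):
        self.theories.append(np.asarray(params, dtype=float))

    def master_theories(self, tol=0.1):
        clusters = []
        for p in self.theories:
            for c in clusters:
                if np.linalg.norm(p - c[0]) < tol:  # close to an existing cluster
                    c.append(p)
                    break
            else:
                clusters.append([p])
        return [np.mean(c, axis=0) for c in clusters]

hub = TheoryHub()
hub.add([-9.80, 0.00])   # "gravity" learned in environment A
hub.add([-9.81, 0.01])   # the same law rediscovered in environment B
hub.add([ 2.00, 0.00])   # an unrelated law
print(hub.master_theories())  # two master theories remain
```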

The architecture of the AI’s learning agent is as follows: the theory hub sits at the center of the algorithm’s structure. When the agent encounters a new environment, it first proposes old theories from the hub that account for part of the newly collected data, along with randomly initialized new theories to cover the rest.
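A simplified version of that proposal step might look like the following (our illustration; the linear theory family, the 0.5 threshold, and the function names are assumptions, not the paper’s):

```python
import numpy as np

def make_linear(w):
    """Toy theory family: y = a*x + b, parameterized by w = (a, b)."""
    return lambda x: w[0] * x + w[1]

def propose(hub_theories, xs, ys, n_new=2, seed=0):
    """Keep hub theories that already explain at least part of the new
    data, and add randomly initialized theories for whatever is left."""
    rng = np.random.default_rng(seed)
    kept = [t for t in hub_theories
            if np.min((t(xs) - ys) ** 2) < 0.5]   # fits some points somewhere
    new = [make_linear(rng.normal(size=2)) for _ in range(n_new)]
    return kept + new

xs = np.linspace(0, 1, 50)
ys = 3.0 * xs + 1.0
candidates = propose([make_linear((3.0, 1.0))], xs, ys)
print(len(candidates))  # the matching hub theory plus two fresh ones -> 3
```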

All of these candidate theories are then tested via the divide-and-conquer strategy, in general and specialized scopes simultaneously. The theory hub then applies Occam’s Razor to simplify the theories, while at the same time clustering similar ones from a variety of environments to generate master theories. The results are added back into the theory hub, ready to be reused when new data arrives.
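Tying the pieces together, here is one self-contained toy cycle in Python (a schematic of the ideas above, not the paper’s implementation; every detail, from the linear theory family to the survival rule, is our simplification):

```python
import numpy as np

def step(hub, xs, ys, rng):
    """One cycle: propose (hub theories plus one random newcomer), divide
    and conquer (each point goes to its best predictor), then refit every
    surviving theory on the points it won and return the updated hub."""
    params = list(hub) + [tuple(rng.normal(size=2))]
    models = [lambda x, p=p: p[0] * x + p[1] for p in params]
    wins = np.stack([(m(xs) - ys) ** 2 for m in models]).argmin(axis=0)
    new_hub = []
    for i in range(len(models)):
        mask = wins == i
        if mask.sum() >= 2:                       # a theory must earn its data
            new_hub.append(tuple(np.polyfit(xs[mask], ys[mask], 1)))
    return new_hub

rng = np.random.default_rng(0)
xs = np.linspace(-1, 1, 200)
ys = np.where(xs < 0, -9.8 * xs, 2.0 * xs)        # two "laws" in one environment
hub = []
for _ in range(4):                                # a few refinement passes
    hub = step(hub, xs, ys, rng)
print([tuple(np.round(p, 2)) for p in hub])       # candidate (slope, intercept) pairs
```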

If the AI Physicist functions as proposed, its potential is immense. A well-built AI can draw on, and learn from, a wide range of data over and over, continuously and quickly. If it works as described, this AI Physicist will alter the field of physics, and all of the sciences, forever.

Accelerating trial-and-error thinking at this level of complexity, drawing on a rich wealth of data and translating it into theories designed to be widely applicable and adaptable, means that the scientific process itself will move faster than ever before. Research itself is becoming mechanized, and the results may well be jarring.

The sciences will soon come to be dominated by algorithms like these, which means advances in fields like physics will come more and more rapidly. Compare the rate of technological growth between 1900 and 1999 with the rate between 2000 and 2018; as AI continues to be integrated into our everyday lives, that pace will only accelerate. Considering how physics seeks to understand and manipulate the fundamental workings of our universe, and the speed at which an AI physicist can operate, expect to see the world change faster than ever before.

Source: Massachusetts Institute of Technology
