Artificial neural networks (ANNs) are machine-learning algorithms whose structure is loosely inspired by the human brain. Like other machine-learning algorithms, they can solve problems through trial and error without being explicitly programmed with rules to follow.
Neural networks are often referred to as “artificial intelligence” (AI), although they are much less advanced than the AIs of science fiction. Even so, they can control self-driving cars, deliver ads, recognize faces, translate texts, and even help artists design new paintings or invent paint colors with names like “sudden pine” and “sting grey.”
What does a neural network do?
The first neural networks were developed in the 1950s to test theories about how neurons in the brain store information and respond to input. Deep learning occurs when many layers of virtual neurons are connected together. As in the brain, the strength of the connections between virtual neurons affects the output of a deep neural network. Here, though, the “neurons” are not actual cells but connected computational modules.
Learning involves tuning these connection strengths through trial and error to maximize the network’s performance at solving some problem. The network might learn to make predictions about new data it hasn’t seen before (supervised learning) or to maximize a “reward” function in order to discover new solutions to a problem (reinforcement learning).
A neural network’s architecture, including the number and arrangement of its neurons, or the division of labor among specialized sub-modules, is typically tailored to the problem.
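The tuning process described above can be sketched in a few lines of code. The example below is a toy illustration, not any particular framework: a tiny two-layer network of virtual neurons adjusts its connection strengths by gradient descent until it reproduces the XOR function, a classic problem that a single neuron cannot solve. All names and hyperparameters here are illustrative choices.

```python
import numpy as np

# Toy sketch: a two-layer network learns XOR by repeatedly nudging its
# connection strengths (weights) to reduce its prediction error.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # desired outputs

W1 = rng.normal(size=(2, 16)); b1 = np.zeros(16)        # input -> hidden
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)         # hidden -> output

for _ in range(10000):
    # Forward pass: each layer of virtual neurons weighs its inputs.
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    # Backward pass: cross-entropy gradient, propagated to every weight.
    d_logit = out - y
    d_W2 = h.T @ d_logit; d_b2 = d_logit.sum(axis=0)
    d_pre = (d_logit @ W2.T) * (1 - h**2)
    d_W1 = X.T @ d_pre; d_b1 = d_pre.sum(axis=0)
    W2 -= 0.1 * d_W2; b2 -= 0.1 * d_b2
    W1 -= 0.1 * d_W1; b1 -= 0.1 * d_b1

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

The network is never told the rules of XOR; it discovers a set of connection strengths that produce the right answers.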
How come I’ve heard so much about them?
The rise of neural networks owes much to the growth of cloud computing and graphics processing units (GPUs), which have made the networks more powerful and more accessible. Just as important is the availability of large amounts of new training data, such as labeled medical images, satellite images, and customer browsing histories.
Additionally, the proliferation of open-source tools has made neural networks accessible to programmers and non-programmers alike. As neural networks’ value in commercial applications becomes more apparent, developers are looking for new ways to exploit their capabilities, including using them to aid scientific research.
What are neural networks suitable for?
They’re great at matching patterns and spotting subtle trends in multivariate data. And they can make progress toward a goal even when the programmer doesn’t know ahead of time exactly how to solve the problem.
That makes them well suited to problems whose solutions are complex or poorly understood. For example, a programmer may not be able to write down all the rules for deciding whether an image contains a cat, but given enough examples, a neural network can decide for itself which features matter. Similarly, a neural network can learn to recognize the signature of a planetary transit without being told which features are important; it needs only a set of stellar light curves that contain transits and a set that do not. That makes neural networks a very flexible tool, and neural-network frameworks come in specialized flavors for tasks such as classifying data, making predictions, and designing devices and systems.
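As a sketch of that transit example, the snippet below trains a single-layer classifier on synthetic light curves. Everything here is invented for illustration: the “light curves” are just a noisy baseline, with a small brightness dip added when a transit is present, and the classifier is the simplest possible network rather than one used by any real survey.

```python
import numpy as np

rng = np.random.default_rng(1)
n_curves, n_points = 200, 50

def make_curve(has_transit):
    """Synthetic light curve: noisy baseline flux, optional transit dip."""
    flux = 1.0 + 0.01 * rng.normal(size=n_points)
    if has_transit:
        start = rng.integers(10, 35)
        flux[start:start + 5] -= 0.1  # brightness dip during the transit
    return flux

labels = rng.integers(0, 2, size=n_curves).astype(float)
X = np.array([make_curve(bool(l)) for l in labels])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each time step

# The classifier is never told that the dip is the important feature;
# it learns which time steps matter from the labeled examples alone.
w, b = np.zeros(n_points), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted transit probability
    grad = (p - labels) / n_curves
    w -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum()

accuracy = ((p > 0.5) == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

After training, the learned weights concentrate on the time steps where dips occur, even though nothing in the code points them there.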
Neural-network classifiers have been used, for example, to identify rare, interesting collision events at CERN’s Large Hadron Collider. Classifiers are particularly well suited to projects that generate too much data to be easily sorted or stored, especially if an occasional error can be tolerated. Events of interest are flagged for human review.
Another kind of neural network makes predictions based on input data. Such networks have been used, for example, to predict the absorption spectrum of a nanoparticle from its structure after being fed examples of other nanoparticles and their spectra. Similar networks are used in chemistry and drug discovery, for example to predict the binding affinities of proteins and ligands from their structures.
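A prediction network of this kind is, at heart, a learned input-to-output mapping. The toy sketch below stands in for a structure-to-spectrum task: a small network with one hidden layer learns a smooth nonlinear curve (here simply a sine function, chosen for illustration) from example input-output pairs, using mean-squared-error gradient descent.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy stand-in for a structure -> property task: one "structure" input,
# targets lying on a smooth nonlinear curve.
x = np.linspace(-3, 3, 100).reshape(-1, 1)
y = np.sin(x)

W1 = 0.5 * rng.normal(size=(1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.normal(size=(16, 1)); b2 = np.zeros(1)

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)          # hidden layer
    pred = h @ W2 + b2                # predicted property
    d = 2 * (pred - y) / len(x)       # gradient of mean squared error
    d_W2 = h.T @ d; d_b2 = d.sum(axis=0)
    d_pre = (d @ W2.T) * (1 - h**2)
    d_W1 = x.T @ d_pre; d_b1 = d_pre.sum(axis=0)
    W2 -= 0.1 * d_W2; b2 -= 0.1 * d_b2
    W1 -= 0.1 * d_W1; b1 -= 0.1 * d_b1

mse = np.mean((pred - y) ** 2)
print(f"final mean squared error: {mse:.3f}")
```

Swapping the sine curve for measured nanoparticle spectra, and the single input for a structural description, gives the general shape of the prediction networks described above.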
Combined with reinforcement learning, neural networks can also solve design problems. In reinforcement learning, the network maximizes a reward function rather than imitating a list of examples. By trial and error, a neural network controlling a robot’s limbs might adjust its own connections in a way that maximizes the robot’s horizontal speed. Another network might learn to maximize the ratio of two fragmentation products generated when an ultrashort laser pulse hits a particular molecule.
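The trial-and-error core of that idea can be sketched without any real robot. The example below uses simple hill climbing as a stand-in for reinforcement learning: the reward function is an invented surrogate for something like horizontal speed, and a controller keeps any random tweak to its weights that increases the reward. No labeled examples are imitated at any point.

```python
import numpy as np

rng = np.random.default_rng(2)

def reward(weights):
    """Made-up reward: peaks when the controller hits hidden
    optimal settings, standing in for 'robot speed'."""
    target = np.array([0.5, -1.0, 2.0])
    return -np.sum((weights - target) ** 2)

weights = np.zeros(3)
best = reward(weights)
for _ in range(2000):
    candidate = weights + 0.1 * rng.normal(size=3)  # trial...
    r = reward(candidate)
    if r > best:                                    # ...and error
        weights, best = candidate, r

print(f"best reward found: {best:.4f}")  # approaches 0, the maximum
```

Real reinforcement-learning algorithms are far more sample-efficient than this blind search, but the loop structure, propose a change, measure the reward, keep what works, is the same.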