Physics: controversial theory argues that the entire universe is a neural network 

The universe could be a neural network — an interconnected computational system similar in structure to the human brain — a controversial theory has proposed.

Artificial neural networks, as built by computer scientists, are made up of various nodes — equivalent to biological neurons — that process and pass on signals.

The network can change as it is used — such as by increasing the weight given to certain nodes and connections — allowing it to ‘learn’ as it goes along. 

For example, given a set of cat pictures to study, a network can learn to pick out characteristic cat features on its own — and so tell cats apart from other animals. 
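In code, that learning step is nothing mystical: it is an arithmetic nudge to the weights whenever the network gets an answer wrong. The sketch below is a minimal perceptron, a classic single-node learner and a hypothetical stand-in for the cat-recogniser described above; the two-number ‘features’ and their labels are invented for illustration.

```python
# Toy training data: each example is (features, label).
# Features are invented stand-ins for "cat-ness" measurements
# (e.g. ear pointiness, whisker count); label 1 = cat, 0 = not cat.
data = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.9), 1),
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.2), 0),
]

weights = [0.0, 0.0]   # one weight per input feature
bias = 0.0
lr = 0.1               # learning rate: how big each adjustment is

def predict(x):
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# Perceptron rule: when the network is wrong, nudge each weight
# in the direction that would have made the answer more correct.
for epoch in range(20):
    for x, label in data:
        error = label - predict(x)
        weights[0] += lr * error * x[0]
        weights[1] += lr * error * x[1]
        bias += lr * error

print(weights, bias)                  # weights now encode which features matter
print([predict(x) for x, _ in data])  # expect [1, 1, 1, 0, 0, 0]
```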

However, physicist Vitaly Vanchurin of the University of Minnesota Duluth believes that — on a fundamental level — everything we know may be one of these systems.

The notion has been proposed as a way to reconcile areas of so-called ‘classical’ physics with those of quantum mechanics — a long-standing problem in physics.

‘We are not just saying that the artificial neural networks can be useful for analysing physical systems, or for discovering physical laws — we are saying that this is how the world around us actually works,’ Professor Vanchurin wrote in his paper.

‘This is a very bold claim,’ he conceded. 

‘It could be considered as a proposal for the theory of everything, and as such it should be easy to prove it wrong.’

‘All that is needed is to find a physical phenomenon which cannot be described by neural networks. Unfortunately, [this] is easier said than done.’

When considering the workings of the universe on a large scale, physicists use a particular set of theories as tools.

These are ‘classical mechanics’ — built on Newton’s laws of motion — and Einstein’s theories of relativity, which explain the relationship between space and time, and how mass distorts the fabric of spacetime to create gravitational effects.

To explain phenomena on the atomic and subatomic scales, however, physicists have found that the universe is better explained by so-called ‘quantum mechanics’.

In this theory, quantities like energy and momentum are restricted to discrete, rather than continuous, values (known as ‘quanta’); all objects have the properties of both particles and waves; and measuring a system changes it.

This last point is related to Heisenberg’s ‘uncertainty principle’, which holds that certain linked properties — such as an object’s position and momentum — cannot both be precisely known at the same time, bringing probabilities into play.
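In symbols, Heisenberg’s relation puts a hard floor under how precisely those linked properties can be pinned down at once, where Δx is the uncertainty in position, Δp the uncertainty in momentum, and ħ the reduced Planck constant:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```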

While these theories explain the universe very well on their own scales, physicists have long struggled to reconcile the two into a single universal theory — a challenge sometimes dubbed ‘the problem of quantum gravity’.

For the two theories to mesh, gravity — described by general relativity as the curving of spacetime by matter/energy — would likely need to be made up of quanta and therefore have its own elementary particle, the graviton.

Unfortunately, the effects of a single graviton on matter would be extraordinarily weak — seemingly making theories of quantum gravity impossible to test, and so impossible to determine which, if any, are correct. 

Instead of trying to reconcile general relativity and quantum mechanics into one fundamental universal theory, however, the neural network idea suggests that the behaviours seen in both theories emerge from something deeper.

In his study, Professor Vanchurin set out to create a model of how neural networks work — in particular, in a system with a large number of individual nodes.

He says that, in certain conditions — near equilibrium — the learning behaviour of a neural network can be approximately described by the equations of quantum mechanics, but that further from equilibrium, the laws of classical physics instead come into play.
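The paper’s mathematics is well beyond the scope of a news article, but the object it studies can be sketched in a few lines: a learning system relaxing towards an equilibrium of its loss function. The toy snippet below uses invented numbers and is nothing like the paper’s actual equations; it simply shows a weight vector descending towards a loss minimum, the ‘near equilibrium’ regime in question.

```python
import numpy as np

# Toy loss: L(w) = 0.5 * ||w - w_star||^2, with its equilibrium at w_star.
w_star = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
lr = 0.1

for step in range(30):
    grad = w - w_star   # gradient of the toy loss
    w -= lr * grad      # gradient-descent learning step
    if step % 10 == 0:
        # distance to equilibrium shrinks at every step
        print(step, np.linalg.norm(w - w_star))
```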

‘Coincidence? Maybe, but as far as we know quantum and classical mechanics is exactly how the physical world works,’ he told Futurism.

‘The idea is definitely crazy, but if it is crazy enough to be true? That remains to be seen,’ he added.

The controversial notion has been proposed as a way to reconcile areas of so-called ‘classical’ physics (including general relativity, which explains how mass and energy distort spacetime to create gravitational effects, as depicted in this artist’s impression) with those of quantum mechanics. This ‘quantum gravity problem’ has been a long-standing problem in physics

In addition, he explained, the theory could account for so-called ‘hidden variables’ — unknown properties of objects proposed by some physicists to explain away the uncertainty inherent in most theories of quantum mechanics.

‘In the emergent quantum mechanics which I considered, the hidden variables are the states of the individual neurons, and the trainable variables — such as the bias vector and weight matrix — are quantum variables,’ Professor Vanchurin said. 
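That split between the two kinds of variables can be made concrete. The sketch below illustrates the distinction only and is not Professor Vanchurin’s formalism: the fast-changing neuron activations play the role of the hidden state variables, while the bias vector and weight matrix are the slowly trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Trainable variables: weight matrix W and bias vector b.
# In the picture described above, these slow variables are the
# ones whose statistics come to look quantum-mechanical.
W = rng.normal(scale=0.1, size=(4, 4))
b = np.zeros(4)

# State ('hidden') variables: the activations of the individual neurons.
x = rng.normal(size=4)

# Fast dynamics: the neuron states are updated by the network itself...
for _ in range(10):
    x = np.tanh(W @ x + b)

# ...while the trainable variables change slowly, via learning.
# Here: one crude, delta-rule-style nudge toward a hypothetical target state.
target = np.full(4, 0.5)
error = x - target
W -= 0.01 * np.outer(error, x)   # small update to the weight matrix
b -= 0.01 * error                # small update to the bias vector
```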

In such a neural network, everything — from particles and atoms to cells and beyond — would emerge in a process analogous to evolution by natural selection, Professor Vanchurin has suggested.

‘There are structures of the microscopic neural network which are more stable and there are other structures which are less stable,’ he told Futurism.

‘The more stable structures would survive the evolution, and the less stable structures would be exterminated.’ 

‘On the smallest scales I expect that the natural selection should produce some very low complexity structures such as chains of neurons, but on larger scales the structures would be more complicated. 

‘I see no reason why this process should be confined to a particular length scale and so the claim is that everything that we see around us — e.g. particles, atoms, cells, observers, etc. — is the outcome of natural selection.’

As to whether the universe-as-neural-network theory holds merit, the rest of the physics community seems unlikely to be on board.

As Professor Vanchurin told Futurism, ’99 percent of physicists would tell you that quantum mechanics is the main [theory] and everything else should somehow emerge from it’ — a tenet at odds with the notion that it is not fundamental.

In addition, experts in both physics and machine learning have expressed scepticism over the idea — declining to comment on the record.

A pre-print of the researcher’s article, which has not yet been peer-reviewed, can be read on the arXiv repository.

HOW ARTIFICIAL INTELLIGENCES LEARN USING NEURAL NETWORKS

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.

ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.

Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.   
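As a minimal sketch of that ‘feed it data’ loop, the example below trains a logistic-regression model, the simplest trainable stand-in for a full network, on synthetic labelled data; the numbers and the labelling rule are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 'massive amounts of information': 1,000 labelled examples.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # invented labelling rule

w = np.zeros(2)
b = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 'Teaching' the model: repeatedly show it the data and adjust the
# weights to shrink the gap between predictions and labels.
for epoch in range(100):
    p = sigmoid(X @ w + b)            # current predictions
    grad_w = X.T @ (p - y) / len(y)   # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
print(w, b, accuracy)   # accuracy should be close to 1.0
```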

Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.

The process of inputting this data can be extremely time-consuming, and is limited to one type of knowledge. 

A newer breed of ANN, the generative adversarial network (GAN), pits two AI bots against each other, allowing them to learn from each other. 

This approach is designed to speed up the process of learning, as well as refining the output created by AI systems. 
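A stripped-down sketch of that adversarial loop appears below: a one-parameter ‘generator’ tries to mimic a target distribution, while a logistic ‘discriminator’ tries to tell real samples from generated ones, each improving against the other. Real GANs use deep networks for both roles; this toy version, with invented numbers, shows only the alternating-update structure.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Real data: samples from a Gaussian centred at 4 (the target to imitate).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

mu = 0.0         # generator's only parameter: the mean of its output
a, c = 0.1, 0.0  # discriminator: D(x) = sigmoid(a * x + c)
lr = 0.05

for step in range(2000):
    x_real = real_batch(64)
    z = rng.normal(0.0, 1.0, 64)
    x_fake = mu + z                      # generator's samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    grad_a = np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    a -= lr * grad_a
    c -= lr * grad_c

    # Generator step: adjust mu so the discriminator rates fakes as real.
    d_fake = sigmoid(a * x_fake + c)
    grad_mu = np.mean((d_fake - 1) * a)  # gradient of -log D(fake) w.r.t. mu
    mu -= lr * grad_mu

print(mu)   # should drift toward 4, the mean of the real data
```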
