Real artificial intelligence (AI) is about reality and causality, and how they are reflected in digital mentality and in cyberspace or virtuality.
There are two classes of machine intelligence: Non-Real AI and Real AI.
Non-Real AI refers to the simulation of human intelligence in machines that are programmed to think and act like humans.
Real AI is about reality, mentality, and causality, and how they are reflected as digital mentality, or machine intelligence, in cyberspace, virtuality, or mixed reality.
Its key domains, as interacting universes, are:
- Actuality (the Physical World, the Universe, the total environment; philosophy, ontology, science, mathematics, and technology)
- Mentality (the Mental or Counterfactual World; cognitive science, psychology, neuroscience)
- Virtuality (the Digital Data Universe, Virtual World, or Mixed Reality; cybernetics, computer science, AI, ML, DL, data science, data analytics, information engineering)
It builds on Popper's three-worlds ontology, which divides reality into Worlds 1, 2, and 3:
- World 1: the world of physical objects and events, including biological entities
- World 2: the world of individual mental processes
- World 3: the world of abstractions that emerge from, and have an effect back on, World 2 through their representations in World 1
Real AI runs causal algorithms, i.e. sets of causal rules, for solving complex real-world problems or accomplishing tasks.
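The notion of an algorithm as a set of causal rules can be sketched with a toy example. All rule names and the chaining logic below are hypothetical illustrations, not an actual Real AI implementation:

```python
# Toy illustration of an algorithm as a set of causal rules.
# Each rule maps a cause to its immediate effect; chaining the rules
# traces a causal path from an initial event to its downstream effects.

CAUSAL_RULES = {                  # hypothetical cause -> effect pairs
    "rain": "wet_ground",
    "wet_ground": "slippery_road",
    "slippery_road": "slow_traffic",
}

def causal_chain(cause, rules):
    """Follow the rules from an initial cause to all downstream effects."""
    chain = [cause]
    while chain[-1] in rules:
        chain.append(rules[chain[-1]])
    return chain

print(causal_chain("rain", CAUSAL_RULES))
# ['rain', 'wet_ground', 'slippery_road', 'slow_traffic']
```

The point of the sketch is the contrast with pattern-matching models: here the system's behavior is driven by explicit cause-effect structure rather than by statistical correlations in data.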
As such, Real AI emerges as a global AI platform embracing all sorts and descriptions of Non-Real AI:
narrow and weak AI, ML, and DL (deep neural networks). These emulate, mimic, simulate, counterfeit, or fake synapse-connected brain neurons, certain cognitive functions, skills, or capacities, or certain intelligent behaviors, running on graphics processing units (GPUs) or processors specialized for AI workloads.
Computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing structures, patterns, and correlations in that data. Industrial AI technology can analyze real-world data (such as oilfield and mining data), make assumptions, and provide insights, predictions, controls, recommendations, and optimization (of energy, waste, raw materials, chemicals, or manpower), but only for very narrow, specific tasks.
Computers already exceed humans at many specialized tasks, such as piloting autonomous drones, playing strategy games, or performing certain kinds of mathematics. But they are only as good as the training data fed to them: GIGO, garbage in, garbage out; bias in, bias out.
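The "bias in, bias out" point can be made concrete with a deliberately trivial learner. The training labels and the majority-class "model" below are made up for illustration; the mechanism, however, is the same one that skews real systems trained on skewed data:

```python
from collections import Counter

# "Garbage in, garbage out": a trivial majority-class learner inherits
# whatever skew is present in its training labels. The data is invented;
# 95% of the historical decisions happen to say "approve".

biased_training_labels = ["approve"] * 95 + ["deny"] * 5

def fit_majority(labels):
    """Learn the single most frequent label, a stand-in for a biased model."""
    return Counter(labels).most_common(1)[0][0]

model = fit_majority(biased_training_labels)
print(model)   # the "model" simply reproduces the skew: 'approve'
```

A more sophisticated model hides this failure mode behind more parameters, but if the training data encodes a bias, the learned behavior will too.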
To date, all the capabilities attributed to machine learning and AI fall into the category of narrow AI: 1) an algorithm designed to do one thing (say, identify objects) cannot be used for anything else (play a video game, for example), and 2) anything one algorithm "learns" cannot be effectively transferred to another algorithm designed for a different specific purpose. For example, AlphaGo, the algorithm that outperformed the human world champion at the game of Go, cannot play other games, despite those games being much simpler.
In image processing, lower layers may identify edges, while higher layers may identify concepts relevant to a human, such as digits or faces. Deep learning is currently the most sophisticated narrow-AI architecture in use, with popular algorithm families including convolutional, recurrent, long short-term memory, generative adversarial, and belief networks. Each is matched to its task: object detection and image classification, speech and voice recognition, time-series prediction and natural language processing, machine translation and language modeling, digital photo restoration and deepfake video, or disease detection.
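What a low convolutional layer actually computes when it "identifies edges" can be shown without any deep-learning framework: it slides a small kernel across the image. The tiny image and the hand-picked vertical-edge kernel below are illustrative stand-ins for the filters a network would learn:

```python
# A minimal sketch of a low-level convolutional operation: sliding a small
# kernel over an image. The 1x2 kernel [-1, 1] responds only where pixel
# intensity changes left-to-right, i.e. at a vertical edge. Pure Python,
# toy 4x4 image with a dark left half and a bright right half.

image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
kernel = [[-1, 1]]  # right pixel minus left pixel: a vertical-edge detector

def convolve(img, ker):
    """Valid (no-padding) 2D cross-correlation of img with ker."""
    kh, kw = len(ker), len(ker[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(ker[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

print(convolve(image, kernel))
# Every output row reads [0, 9, 0]: the filter fires only at the edge column.
```

In a trained convolutional network the kernels are learned rather than hand-written, and higher layers combine many such edge responses into the digit- and face-level concepts mentioned above.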
Avoiding another AI winter requires a paradigm shift beyond the current narrow and weak AI/ML/deep-learning/neural-network model. Such a shift could lead a step beyond narrow AI toward a strong or "general AI," also known as artificial general intelligence (AGI).
Strong or general AI is conceived as a generally intelligent system that can act and think much like humans, but at the speed of the fastest computer systems. It would also have consciousness, thought, self-awareness, sentience, and sapience. As Geoffrey Hinton noted: "There are one trillion synapses in a cubic centimeter of the brain. If there is such a thing as general AI, [the system] would probably require one trillion synapses."
Some suggest that GPT-3, an NLP neural network with 175 billion parameters, is on the way to an AGI system; others argue that reinforcement learning agents are the path to general intelligence.
The reward-is-enough hypothesis suggests that agents with powerful reinforcement learning algorithms, when placed in rich environments with simple rewards, could develop the kind of broad, multi-attribute intelligence that constitutes artificial general intelligence.
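The core mechanism behind that hypothesis, learning behavior from reward alone, can be sketched with tabular Q-learning in a deliberately tiny environment. The corridor world, constants, and reward scheme below are invented for illustration; only the update rule is standard:

```python
import random

# A minimal reward-driven learner: tabular Q-learning in a 5-cell corridor.
# The only signal the agent ever receives is +1 for reaching the rightmost
# cell, yet from that sparse reward it learns a full policy: always move right.

N_STATES, ACTIONS = 5, [-1, +1]                # actions: move left / move right
ALPHA, GAMMA, EPS, EPISODES = 0.5, 0.9, 0.2, 500
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(EPISODES):
    s = 0
    while s != N_STATES - 1:                   # goal is the last cell
        if random.random() < EPS:              # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)  # walls clip the move
        r = 1.0 if s2 == N_STATES - 1 else 0.0 # simple, sparse reward
        # Standard Q-learning update toward the bootstrapped target.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, x)] for x in ACTIONS)
                              - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)   # learned greedy policy: +1 (move right) in every non-goal state
```

The gap the hypothesis must bridge is scale: whether the same reward-driven loop, in environments as rich as the real world, could yield broad intelligence rather than a single corridor-walking habit.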
Superhuman AI, or artificial superintelligence (ASI), is, in Nick Bostrom's definition, "an intellect that is much smarter than the best human brain in practically every field, including scientific creativity, general wisdom and social skills." The creation of superintelligence, according to some, could result in disaster for humanity, possibly even extinction.
Radically enhanced human brains could be achievable through the convergence of genetic engineering, nanotechnology, information technology, and cognitive science, while greater-than-human machine intelligence is likely to come about through advances in computer science, cognitive science, and whole brain emulation.
“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So, we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,” noted physicist Stephen Hawking in 2017, shortly before his death.
Many high-profile figures warn that ASI could be disastrous on a global scale. Tesla and SpaceX CEO Elon Musk has predicted dire consequences, claiming that ASI is potentially more dangerous than nuclear warheads, and has frequently called for greater regulatory oversight of the development of superintelligence. "The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are," he has said. "This tends to plague smart people. They define themselves by their intelligence, and they don't like the idea that a machine could be way smarter than them, so they discount the idea—which is fundamentally flawed."