Can Machines Think?

In his 1950 paper “Computing Machinery and Intelligence,” Alan Turing wrote, “I propose to consider the question, ‘Can machines think?’” Can a computer achieve a state of consciousness? Distinguishing a bot from a human has lately become overwhelmingly difficult; suddenly, humans sound like bots.

We have always had ways to distinguish bots from real humans, but everything has since become so unreal. Recent advancements in technology are outpacing Moore’s Law, and we now live with technologies that, 50 years ago, would not have existed even in the realm of magic. The Age of AI.

The Turing Test

In that same paper, Turing proposed a test: can a computer mimic human intelligence under certain conditions? Years later, this test was used to distinguish humans from computers.

Here is how it worked:

Suppose a computer and a person are questioned on a specific topic, and a human evaluator judges their answers without knowing which answers came from which participant. Would the evaluator be able to distinguish the human from the computer?

When the evaluator could not distinguish between the two, the computer was considered to have artificial intelligence.
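To make the setup concrete, here is a toy Python simulation of the game. The human_answer and machine_answer functions are hypothetical stand-ins for the two participants; the point is only that once the answers are indistinguishable, the evaluator can do no better than a coin flip.

```python
import random

def human_answer(question: str) -> str:
    # Hypothetical stand-in for a human participant.
    return "I'd say it depends on the context, honestly."

def machine_answer(question: str) -> str:
    # Hypothetical stand-in for a machine that mimics the human perfectly.
    return "I'd say it depends on the context, honestly."

def evaluator_fooled(question: str) -> bool:
    """One round of the game: True if the evaluator fails to spot the machine."""
    answers = [("human", human_answer(question)),
               ("machine", machine_answer(question))]
    random.shuffle(answers)  # hide which answer came from which participant

    # With identical answers, the evaluator can only guess.
    guess = random.randrange(2)
    machine_index = next(i for i, (who, _) in enumerate(answers) if who == "machine")
    return guess != machine_index

rounds = 1000
fooled = sum(evaluator_fooled("Can machines think?") for _ in range(rounds))
print(f"Evaluator fooled in {fooled}/{rounds} rounds")  # roughly 500 expected
```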

This worked until it was outpaced by the powerful tech we now have, and other tests were devised to draw the distinction. These include the Marcus Test, the Lovelace Test 2.0, and the Reverse Turing Test, in which a human proves to a computer that they are not a computer. A well-known example is the CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart).
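As a rough illustration of the reverse Turing test, here is a minimal challenge-and-response sketch in Python. It is not how real CAPTCHAs work internally (they rely on distorted images, audio, or behavioral signals); it only shows the flow of a service asking a client to prove it is human.

```python
import random
import string

def make_challenge() -> tuple[str, str]:
    """Issue a challenge that is easy for a human but awkward for a simple bot."""
    word = "".join(random.choices(string.ascii_lowercase, k=6))
    # A real CAPTCHA would render `word` as a distorted image or audio clip.
    prompt = f"Type this word backwards: {word}"
    expected = word[::-1]
    return prompt, expected

def verify(response: str, expected: str) -> bool:
    return response.strip().lower() == expected

prompt, expected = make_challenge()
print(prompt)
answer = input("> ")
print("Access granted." if verify(answer, expected) else "Verification failed.")
```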

The Rise of Conversational AI

Conversational AI, what is it?

Behind conversational AI lie machine learning, large volumes of data, and a great deal of Natural Language Processing (NLP), all working to mimic the way humans communicate. It involves four steps, sketched in code after this list:

  • Accepting user input (either speech or text)

  • Analyzing the input

  • Dialog management, used to formulate a response that mimics a human

  • Reinforcement learning to improve the responses
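Here is a minimal Python sketch of that loop, assuming text input only. Every piece is a hypothetical stand-in: real systems use trained NLP models for the analysis step and learned dialog policies instead of a lookup table, and the reinforcement learning step is reduced here to logging exchanges for later training.

```python
def accept_input() -> str:
    # Step 1: accept user input (speech would be transcribed to text first).
    return input("You: ")

def analyze(text: str) -> str:
    # Step 2: analyze the input. A toy keyword matcher standing in for
    # real intent classification and entity extraction.
    words = text.lower().split()
    if "hello" in words or "hi" in words:
        return "greeting"
    if text.rstrip().endswith("?"):
        return "question"
    return "statement"

# Step 3: dialog management maps the detected intent to a human-like reply.
RESPONSES = {
    "greeting": "Hello! How can I help you today?",
    "question": "Good question. Let me look into that.",
    "statement": "I see. Tell me more.",
}

def record_feedback(text: str, intent: str, reply: str) -> None:
    # Step 4: log the exchange so learning (or human review) can later
    # improve the responses.
    with open("dialog_log.tsv", "a") as log:
        log.write(f"{text}\t{intent}\t{reply}\n")

while True:
    user_text = accept_input()
    if user_text.lower() in {"quit", "exit"}:
        break
    intent = analyze(user_text)
    reply = RESPONSES[intent]
    record_feedback(user_text, intent, reply)
    print(f"Bot: {reply}")
```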

But just because something is readable doesn’t mean it makes sense. The best application of this technology is to automate repetitive tasks, which could include:

  • Customer care bots

  • Improving and debugging code

  • Summarizing text

This powerful application of AI comes with a set of disruptions:

  • Piracy, since these models are trained on massive datasets collected online

  • Low-quality content

  • Content that lacks the human touch of, say, early-2000s blog posts that made you feel a human experience, something AI will probably never achieve

  • Output that is often too perfect and too easy to understand to feel human

The Risks of Dangerous AI

Elon Musk thinks that AI is more dangerous than a nuclear warhead. If you have no idea how dangerous that is: a nuclear strike could kill millions within weeks, igniting clothes and skin, which is why the warhead is considered the most potent weapon and the largest threat to humanity. With AI, by contrast, we don’t know what it is capable of doing; we simply have no idea.

A Letter From Elon Musk & Steve Wozniak to All AI Developers

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

In an open letter titled “Pause Giant AI Experiments: An Open Letter,” addressed to all AI developers, Elon Musk, Apple co-founder Steve Wozniak, and other significant figures called on all AI labs to pause the development of AI systems more powerful than GPT-4, since AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The letter had been signed by 2,489 people at the time of writing this article. Signatories include:

  • Yoshua Bengio, Founder and Scientific Director at Mila, Turing Prize winner and professor at the University of Montreal

  • Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”

  • Elon Musk, CEO of SpaceX, Tesla & Twitter

  • Steve Wozniak, Co-founder, Apple Inc.

Elon has always insisted on the regulation of AI development.

Here is part of the letter:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

The Dangers of AI

Today we can only speculate about some of the dangers of AI. They include:

  • Loss of human control, a scenario long imagined in sci-fi
  • Unemployment and economic disruption
  • Autonomous weapons and warfare
  • Disinformation and manipulation

To prevent this from happening, it has been suggested that we establish oversight of AI development, since a bad actor could cause chaos.

The Journey to AI Consciousness

It is still hard to imagine consciousness as an AI feature. But what is consciousness?

consciousness

/ˈkɒnʃəsnəs/

A person’s awareness or perception of something.

But how can we measure consciousness? There is simply no way. It is one of the main reasons some people believe there is a God: consciousness separates us from animals. Could AI achieve this state in the future?

Many animals are conscious, since consciousness is not entangled with long-term memory. AI might one day achieve a state of consciousness while we still argue over whether it got there. To achieve this, maybe we will have to change how models are trained. But how do you train for consciousness when it is not falsifiable and cannot be tested?

To build a plane, you borrow the concept of a flying bird, yet you don’t add flapping wings; a different mechanism is used to achieve flight. The same kind of reasoning may be what achieves AI consciousness. But why do we need it anyway? There are many unanswered questions surrounding this topic.

Overall, creating AI consciousness is a complex and challenging goal that will require significant advances in our understanding of the brain.

AGI for the Benefit of Humanity

Unlike AI consciousness, AGI (Artificial General Intelligence) is something we will one day reach. Read OpenAI’s “Planning for AGI and beyond.”

Conclusion

Creating AI consciousness is a complex and challenging goal that will require significant advances in our understanding of the brain. And while machines may be able to mimic human intelligence and communication, whether they can truly think like humans is still up for debate.