What is AI? History, Types, Advantages, Disadvantages


AI (Artificial Intelligence) is one of the most talked-about technology sectors today. AI is making our world more advanced day by day, so in this digital age you should have at least a basic idea of what artificial intelligence is.

So in this post, I thought I would share what I know about AI.

  • What is AI?
  • What is the History of Artificial Intelligence?
  • Who is the Father of Artificial Intelligence?
  • What are the Characteristics of Artificial Intelligence?
  • What are the Features of Artificial Intelligence?
  • Where is Artificial Intelligence used?
  • Why is primary research not possible through Artificial Intelligence?
  • What Programs are used to create Artificial Intelligence?
  • What are the Advantages of using Artificial Intelligence?
  • What are the Difficulties of using Artificial Intelligence?

This post will give you an idea about all of these topics. So let’s step into the world of Artificial Intelligence for a while.


What is AI (Artificial Intelligence)?

Computers have no intelligence of their own. A computer can only work according to the information and programs stored in it; it cannot make its own decisions when faced with a problem. Artificial intelligence is the arrangement by which a computer is given enough knowledge to decide on its own how to handle a problem.

What is the History of AI (Artificial Intelligence)?

Artificial humans capable of thinking first appeared as storytelling devices. The idea of building a mechanism to carry out practical reasoning probably began with Ramon Llull (around 1300 AD).

With his calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine (which Wilhelm Schickard had first engineered around 1623) to operate on ideas rather than numbers. Artificial humans became commonplace in fiction from the nineteenth century onward, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).

The study of mechanical or “formal” reasoning began in ancient times with philosophers and mathematicians. The study of mathematical logic led directly to Alan Turing’s theory of computation, which showed that a machine, by shuffling symbols as simple as “0” and “1”, could carry out any conceivable act of mathematical deduction.

This insight, that a digital computer can simulate any process of formal reasoning, came to be known as the Church–Turing thesis. Discoveries in neuroscience, information theory, and cybernetics encouraged researchers to consider building an electronic brain. The first work now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was established at a workshop at Dartmouth College in 1956. Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT), and Arthur Samuel (IBM) became the founders and leaders of AI research. They and their students produced programs that the press described as “astonishing”:

  • Learning to play checkers
  • Solving word problems in algebra
  • Proving logical theorems
  • Speaking English

By the mid-1960s, the United States Department of Defense was funding AI research heavily, and laboratories had been established around the world. The founders of AI were optimistic about the future: Herbert Simon predicted that “machines will be capable, within twenty years, of doing any work a man can do,” and Marvin Minsky agreed, writing that “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”

In the 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis, and other areas. The success was due to increasing computational power (see Moore’s law), a greater emphasis on solving specific problems, new ties between AI and other fields, and researchers’ commitment to mathematical methods and scientific standards.

Deep Blue became the first computer chess system to defeat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

Advanced statistical techniques (loosely known as deep learning), access to large amounts of data, and faster computers enabled rapid advances in machine learning and perception.

By the mid-2010s, machine learning applications were in use all over the world. In a Jeopardy! quiz show exhibition match, IBM’s question-answering system Watson defeated the two greatest champions, Brad Rutter and Ken Jennings, by a significant margin. Kinect, which provides a 3D body-motion interface for the Xbox 360 and Xbox One, uses algorithms that emerged from lengthy AI research, as do intelligent personal assistants on smartphones.

In March 2016, AlphaGo won 4 out of 5 games in a match with Go champion Lee Sedol, becoming the first computer Go system to defeat a professional Go player without handicaps.

At the Future of Go Summit in 2017, AlphaGo won a three-game match with Ke Jie, who at the time had held the world No. 1 ranking for two consecutive years.

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence: the number of software projects using AI within Google increased from “sporadic use” in 2012 to more than 2,700 projects.

Clark also pointed out that the error rate in image processing tasks has fallen significantly since 2011. He attributes this to the growth of affordable neural networks, driven by the rise of cloud computing infrastructure and by better research tools and datasets.

Other examples include Microsoft’s Skype system, which can automatically translate from one language to another, and Facebook’s system, which can describe images to blind people.

Who is the Father of AI (Artificial Intelligence)?

Alan Mathison Turing

Alan Mathison Turing, the British scientist and mathematician, is regarded as the father of artificial intelligence. Turing is best known for his work during World War II on breaking the Nazi Enigma code, which helped the Allies win the war, and for laying the foundations of modern computing.

One of Turing’s most notable contributions to AI is the Turing Test, originally called “the imitation game,” which now has a film adaptation. The test aims to determine whether an AI system has acquired human-level intelligence – a question Turing pondered throughout his life.

John McCarthy

The late John McCarthy is widely respected for his undeniable legacy in AI and computer science. McCarthy is credited with coining and defining the term “artificial intelligence” when AI research began at Dartmouth College in 1956.

After that initial work, McCarthy went on to develop AI programming languages, most notably Lisp. He is also responsible for the time-sharing concept that underlies cloud computing.

What are the Characteristics of Artificial Intelligence?

The ideal feature of artificial intelligence is the ability to reason and take actions that have the best chance of achieving a specific goal. A subset of artificial intelligence is machine learning, in which computer programs automatically learn from and adapt to new data without human assistance.
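The machine-learning idea mentioned above can be shown with a minimal sketch: instead of being told the rule, a program infers it from examples. The code below (Python, chosen purely for illustration; the data and learning rate are made-up values) fits a straight line to example points by gradient descent.

```python
# Minimal machine-learning sketch: fit y = w*x + b to example data by
# gradient descent, so the program "learns" the rule from the data
# instead of being explicitly programmed with it.

def fit_line(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Example data generated by the hidden rule y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(w, b)  # prints values close to 2 and 1
```

After training, the program has recovered the rule (slope 2, intercept 1) from the data alone; this "learn from examples rather than from explicit rules" pattern is the core of machine learning.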

What are the Types of Artificial Intelligence?

Artificial Narrow Intelligence (ANI)

ANI systems can provide feedback or instructions by analyzing possible solutions within a specific, narrow domain. For example, a system trained to identify blood groups cannot identify blood components. This is the first and weakest stage of artificial intelligence.

Artificial General Intelligence (AGI)

AGI involves the use of heuristic techniques. Such a system could collect and analyze data from any new environment quickly enough to draw its own conclusions. This is called strong, or human-level, artificial intelligence.

Artificial Super Intelligence (ASI)

This is still a subject of research. If, in the future, artificial programs can analyze problems and provide decisions or instructions better than humans, that will be the final stage of artificial intelligence.

Where is Artificial Intelligence Used?

There is hardly a computer-based field in today’s world without a practical application of artificial intelligence. Artificial intelligence is being used in areas such as:

  • Medical diagnosis
  • Stock market transactions
  • Controlling robot activities
  • Suggesting solutions to legal problems
  • Aviation
  • Battlefield management
  • Banking activities and stock trading
  • Creating designs
  • Cybersecurity
  • Video games
  • Smart cars
  • Filtering your email spam
  • Determining the price of your Uber trip
  • Data center management
  • Genomics/sequencing
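One item in the list above, email spam filtering, is a classic AI application, and its core idea fits in a few lines. The sketch below (Python for illustration; the training messages are entirely made up) scores a message by comparing how often its words appear in known spam versus known normal mail, in the style of a Naive Bayes classifier.

```python
import math
from collections import Counter

# Made-up training messages for illustration only.
spam_msgs = ["win free money now", "free prize click now", "claim your free money"]
ham_msgs = ["meeting at noon today", "project report attached", "lunch today at noon"]

# Count how often each word appears in each class of message.
spam_words = Counter(w for m in spam_msgs for w in m.split())
ham_words = Counter(w for m in ham_msgs for w in m.split())
vocab = set(spam_words) | set(ham_words)

def is_spam(message):
    """Naive Bayes-style decision: sum log word likelihoods per class."""
    spam_total = sum(spam_words.values())
    ham_total = sum(ham_words.values())
    spam_score = ham_score = 0.0
    for w in message.split():
        # Laplace smoothing (+1) so unseen words don't zero out a class.
        spam_score += math.log((spam_words[w] + 1) / (spam_total + len(vocab)))
        ham_score += math.log((ham_words[w] + 1) / (ham_total + len(vocab)))
    return spam_score > ham_score

print(is_spam("free money now"))   # True: words typical of spam
print(is_spam("report at noon"))   # False: words typical of normal mail
```

Real spam filters use far larger training sets and extra signals, but the principle is the same: the filter is learned from labeled examples rather than written as fixed rules.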

Why is Primary Research not Possible through Artificial Intelligence?

Primary research is not possible with artificial intelligence because artificial intelligence has not yet reached the level where it can research something new on its own.

Artificial intelligence is not yet entirely self-sufficient. Perhaps one day it will think like a human being and make its own decisions; only then will primary research become possible.

What Programs are used to Create Artificial Intelligence?

A variety of programming languages, such as Lisp, CLISP, Prolog, C/C++, and Java, are used to apply artificial intelligence in expert systems and robotics.
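The expert systems mentioned above are built around a rule engine that repeatedly applies "if these facts hold, conclude that" rules. Historically these were written in Lisp or Prolog; the sketch below uses Python purely for illustration, and the medical-sounding rules are invented, not real diagnostic advice.

```python
# Minimal forward-chaining rule engine, the core idea behind classic
# expert systems. Each rule is (list_of_premises, conclusion).

def infer(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Invented example rules for illustration only.
rules = [
    (["has_fever", "has_cough"], "possible_flu"),
    (["possible_flu", "short_of_breath"], "see_doctor"),
]

result = infer(["has_fever", "has_cough", "short_of_breath"], rules)
print(result)  # includes the derived facts "possible_flu" and "see_doctor"
```

Note how the second rule only fires after the first has added "possible_flu" to the fact base; chaining conclusions like this is what lets an expert system reach results no single rule states directly.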

What are the Advantages of using Artificial Intelligence?

  • The use of artificial intelligence greatly reduces the likelihood of errors or mistakes.
  • Tasks can be completed very quickly.
  • Fraud detection is possible in smart-card-based systems using AI.
  • One of the significant advantages of artificial intelligence is that it can perform tasks that are too risky for humans.
  • A device with artificial intelligence can work without any break.
  • Artificial intelligence can be used to make quick decisions.

What are the Difficulties of using Artificial Intelligence?

  • Unemployment in society is increasing daily as machines with artificial intelligence replace human workers.
  • Misuse of artificial intelligence can lead to extensive damage.
  • Machines with artificial intelligence cannot do anything beyond the code they are given.
  • Incorrect programming can make artificial intelligence dangerous.
  • Over-reliance on artificial intelligence can lead to a decline in ordinary human skills.

You may like: 7 Future Technologies that will take the helm in the Future

Resources:

https://en.wikipedia.org/wiki/Artificial_intelligence
https://www.latestgkgs.com/technology-and-innovation-8440-a
https://natschooler.com/the-rise-of-ai-first-businesses/
https://knowledge.wharton.upenn.edu/article/five-strategies-putting-ai-center-digital-transformation/
https://business.time.com/2012/03/07/ibms-watson-supercomputer-heads-to-wall-street/
https://www.techslang.com/fathers-of-artificial-intelligence-influential-ai-leaders-and-innovators/
https://doortoonline.com/what-does-ai-stand-for/
