AI (Artificial Intelligence) Definition
Broadly speaking, artificially intelligent systems can perform tasks commonly associated with human cognitive functions — such as interpreting speech, playing games and identifying patterns. They typically learn how to do so by processing massive amounts of data, looking for patterns to model in their own decision-making. In many cases, humans will supervise an AI’s learning process, reinforcing good decisions and discouraging bad ones. But some AI systems are designed to learn without supervision — for instance, by playing a video game over and over until they eventually figure out the rules and how to win.

Strong AI Vs. Weak AI
Intelligence is tricky to define, which is why AI experts typically distinguish between strong AI and weak AI.
Strong AI
Strong AI, also known as artificial general intelligence, is a machine that can solve problems it’s never been trained to work on — much like a human can. This is the kind of AI we see in movies, like the robots from Westworld or the character Data from Star Trek: The Next Generation. This type of AI doesn’t actually exist yet.
The creation of a machine with human-level intelligence that can be applied to any task is the Holy Grail for many AI researchers, but the quest for artificial general intelligence has been fraught with difficulty. And some believe strong AI research should be limited, due to the potential risks of creating a powerful AI without appropriate guardrails.
In contrast to weak AI, strong AI represents a machine with a full set of cognitive abilities — and an equally wide array of use cases — but time hasn’t eased the difficulty of achieving such a feat.
Weak AI
Weak AI, sometimes referred to as narrow AI or specialized AI, operates within a limited context and is a simulation of human intelligence applied to a narrowly defined problem (like driving a car, transcribing human speech or curating content on a website).
Weak AI is often focused on performing a single task extremely well. While these machines may seem intelligent, they operate under far more constraints and limitations than even the most basic human intelligence.
Weak AI examples include:
- Siri, Alexa and other smart assistants
- Self-driving cars
- Google search
- Conversational bots
- Email spam filters
- Netflix’s recommendations
Machine Learning Vs. Deep Learning
Although the terms “machine learning” and “deep learning” come up frequently in conversations about AI, they should not be used interchangeably. Deep learning is a form of machine learning, and machine learning is a subfield of artificial intelligence.

Machine Learning
A machine learning algorithm is fed data by a computer and uses statistical techniques to help it “learn” how to get progressively better at a task, without necessarily having been specifically programmed for that task. Instead, ML algorithms use historical data as input to predict new output values. To that end, ML consists of both supervised learning (where the expected output for the input is known thanks to labeled data sets) and unsupervised learning (where the expected outputs are unknown due to the use of unlabeled data sets).
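To make the supervised/unsupervised distinction concrete, here is a toy sketch in plain Python (invented data, not any particular library): the supervised learner predicts from labeled examples, while the unsupervised one must discover groups in unlabeled data on its own.

```python
def nearest_neighbor_predict(labeled, x):
    """Supervised: predict the label of x from labeled (value, label) pairs."""
    value, label = min(labeled, key=lambda pair: abs(pair[0] - x))
    return label

def two_means_cluster(values, iterations=10):
    """Unsupervised: split unlabeled values into two groups (1-D k-means)."""
    a, b = min(values), max(values)  # initial cluster centers
    for _ in range(iterations):
        group_a = [v for v in values if abs(v - a) <= abs(v - b)]
        group_b = [v for v in values if abs(v - a) > abs(v - b)]
        a = sum(group_a) / len(group_a)
        b = sum(group_b) / len(group_b)
    return sorted(group_a), sorted(group_b)

# Supervised: the expected outputs ("cold"/"hot") are known up front.
training = [(1.0, "cold"), (2.0, "cold"), (9.0, "hot"), (10.0, "hot")]
print(nearest_neighbor_predict(training, 8.5))   # -> hot

# Unsupervised: no labels; the algorithm discovers the two groups itself.
print(two_means_cluster([1.0, 2.0, 9.0, 10.0]))  # -> ([1.0, 2.0], [9.0, 10.0])
```

The same data appears in both calls; only the presence or absence of labels changes which kind of learning is possible.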
Deep Learning
Deep learning is a type of machine learning that runs inputs through a biologically inspired neural network architecture. The neural networks contain a number of hidden layers through which the data is processed, allowing the machine to go “deep” in its learning, making connections and weighting input for the best results.
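As a rough illustration of what those hidden layers do, here is a minimal forward pass in plain Python. The weights are arbitrary made-up numbers, not a trained network; each layer weights its inputs and passes them through an activation before the next layer sees them.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a tanh activation."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x):
    """Two hidden layers feed an output neuron: the 'deep' in deep learning."""
    h1 = layer(x,  [[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1])   # hidden layer 1
    h2 = layer(h1, [[0.3, 0.8], [-0.6, 0.2]], [0.1, 0.0])   # hidden layer 2
    return sum(w * h for w, h in zip([1.0, -1.0], h2))       # output neuron

print(round(forward([1.0, 2.0]), 4))
```

Training would adjust those weight numbers from data; the structure of stacked layers is what the passage above describes.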
The Four Types of AI
AI can be divided into four categories, based on the type and complexity of the tasks a system is able to perform. They are:
- Reactive machines
- Limited memory
- Theory of mind
- Self awareness
Reactive Machines
A reactive machine follows the most basic of AI principles and, as its name implies, is capable of only using its intelligence to perceive and react to the world in front of it. A reactive machine cannot store a memory and, as a result, cannot rely on past experiences to inform decision making in real time.
Perceiving the world directly means that reactive machines are designed to complete only a limited number of specialized duties. Intentionally narrowing a reactive machine’s worldview has its benefits, however: This type of AI will be more trustworthy and reliable, and it will react the same way to the same stimuli every time.
Reactive Machine Examples
- Deep Blue was designed by IBM in the 1990s as a chess-playing supercomputer and defeated international grandmaster Garry Kasparov in a game. Deep Blue could identify the pieces on a chess board, knew how each moves based on the rules of chess, and could acknowledge each piece’s present position and determine the most logical move at that moment. But it did not learn from past games or opponents: every turn was viewed as its own reality, separate from any other movement that was made beforehand.
- Google’s AlphaGo likewise does not draw on memories of past matches, but relies on its own neural network to evaluate developments of the present game, giving it an edge over Deep Blue in a more complex game. AlphaGo also bested world-class competitors, defeating champion Go player Lee Sedol in 2016.
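The defining trait of a reactive machine can be sketched in a few lines: the agent is a pure function of the current stimulus, with no stored state between calls. The piece values and helper below are hypothetical, purely for illustration.

```python
# Hypothetical sketch of a reactive policy: the "agent" keeps no memory, so
# the same position always produces the same move, as described above.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def reactive_move(available_captures):
    """Pick the single best immediate capture; no history, no stored state."""
    if not available_captures:
        return "no capture"
    return max(available_captures, key=lambda piece: PIECE_VALUES[piece])

# Same stimulus, same response, every time.
print(reactive_move(["pawn", "rook", "bishop"]))  # -> rook
print(reactive_move(["pawn", "rook", "bishop"]))  # -> rook
```

Because nothing persists between calls, the function is perfectly predictable, which is exactly the trustworthiness trade-off the section describes.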
Limited Memory
Limited memory AI has the ability to store previous data and predictions when gathering information and weighing potential decisions — essentially looking into the past for clues on what may come next. Limited memory AI is more complex and presents greater possibilities than reactive machines.
Limited memory AI is created when a team continuously trains a model in how to analyze and utilize new data, or when an AI environment is built so models can be automatically trained and renewed.
When utilizing limited memory AI in ML, six steps must be followed:
- Establish training data
- Create the machine learning model
- Ensure the model can make predictions
- Ensure the model can receive human or environmental feedback
- Store human and environmental feedback as data
- Reiterate the steps above as a cycle
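The six-step cycle above can be sketched in miniature. Here the “model” is just a running average, an intentionally trivial stand-in for a real ML model:

```python
# Illustrative sketch of the limited-memory cycle: train, predict, receive
# feedback, store it as new data, and repeat.

training_data = [10.0, 12.0, 11.0]          # 1. establish training data

def train(data):                            # 2. create the model
    return sum(data) / len(data)

def predict(model):                         # 3. model makes predictions
    return model

for observed in [13.0, 14.0, 15.0]:         # 4. receive environmental feedback
    model = train(training_data)            #    retrain on everything stored so far
    print(round(predict(model), 2), "vs observed", observed)
    training_data.append(observed)          # 5. store the feedback as data
    # 6. the loop repeats, so the next prediction uses the enlarged data set

print(round(train(training_data), 2))       # -> 12.5
```

Each pass through the loop is one turn of the cycle: the stored “past” grows, and the next prediction leans on it.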
Theory of Mind
Theory of mind is just that — theoretical. We have not yet achieved the technological and scientific capabilities necessary to reach this next level of AI.
The concept is based on the psychological premise that other living things have thoughts and emotions that affect one’s own behavior. In terms of AI machines, this would mean that AI could comprehend how humans, animals and other machines feel and make decisions through self-reflection and determination, and then utilize that information to make decisions of its own. Essentially, machines would have to be able to grasp and process the concept of “mind,” the fluctuations of emotions in decision-making and a litany of other psychological concepts in real time, creating a two-way relationship between people and AI.
Self Awareness
Once theory of mind can be established, sometime well into the future of AI, the final step will be for AI to become self-aware. This kind of AI possesses human-level consciousness and understands its own existence in the world, as well as the presence and emotional state of others. It would be able to understand what others may need based not just on what they communicate, but on how they communicate it.
Self-awareness in AI relies both on human researchers understanding the premise of consciousness and then learning how to replicate that so it can be built into machines.
Artificial Intelligence Examples
Artificial intelligence technology takes many forms, from chatbots to navigation apps and wearable fitness trackers. The below examples illustrate the breadth of potential AI applications.
ChatGPT
ChatGPT is an artificial intelligence chatbot capable of producing written content in a range of formats, from essays to code and answers to simple questions. Launched in November 2022 by OpenAI, ChatGPT is powered by a large language model that allows it to closely emulate human writing. ChatGPT also became available as a mobile app for iOS devices in May 2023 and for Android devices in July 2023.
Google Maps
Google Maps uses location data from smartphones, as well as user-reported data on things like construction and car accidents, to monitor the ebb and flow of traffic and assess what the fastest route will be.
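The routing idea can be illustrated with Dijkstra’s shortest-path algorithm, a classic technique for this kind of problem. This is a sketch, not Google’s actual system; the road names and travel times below are invented.

```python
# Find the fastest route through a graph whose edge weights are travel
# times in minutes, using a priority queue (Dijkstra's algorithm).

import heapq

def fastest_route(graph, start, goal):
    """Return (total_minutes, path) for the quickest start -> goal route."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return float("inf"), []

roads = {
    "home":    [("highway", 10), ("main_st", 4)],
    "main_st": [("highway", 3), ("office", 12)],
    "highway": [("office", 5)],
}
print(fastest_route(roads, "home", "office"))
# -> (12, ['home', 'main_st', 'highway', 'office'])
```

In a real navigation system the edge weights would be updated continuously from the live traffic data the paragraph describes, which is what changes the “fastest” answer over time.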
Smart Assistants
Personal assistants like Siri, Alexa and Cortana use natural language processing, or NLP, to receive instructions from users to set reminders, search for online information and control the lights in people’s homes. In many cases, these assistants are designed to learn a user’s preferences and improve their experience over time with better suggestions and more tailored responses.
Snapchat Filters
Snapchat filters use ML algorithms to distinguish between an image’s subject and the background, track facial movements and adjust the image on the screen based on what the user is doing.
Self-Driving Cars
Self-driving cars are a recognizable example of deep learning, since they use deep neural networks to detect objects around them, determine their distance from other cars, identify traffic signals and much more.
Wearables
The wearable sensors and devices used in the healthcare industry also apply deep learning to assess the health condition of the patient, including their blood sugar levels, blood pressure and heart rate. They can also derive patterns from a patient’s prior medical data and use that to anticipate any future health conditions.
MuZero
MuZero, a computer program created by DeepMind, is a promising frontrunner in the quest to achieve true artificial general intelligence. It has managed to master games it has not even been taught to play, including chess and an entire suite of Atari games, through brute force, playing games millions of times.
Artificial Intelligence Benefits
AI has many uses — from boosting vaccine development to automating detection of potential fraud. AI companies raised $66.8 billion in funding in 2022, according to CB Insights research, more than doubling the amount raised in 2020. Because of its fast-paced adoption, AI is making waves in a variety of industries.

Safer Banking
Business Insider Intelligence’s 2022 report on AI in banking found more than half of financial services companies already use AI solutions for risk management and revenue generation. The application of AI in banking could lead to upwards of $400 billion in savings.
Better Medicine
As for medicine, a 2021 World Health Organization report noted that while integrating AI into the healthcare field comes with challenges, the technology “holds great promise,” as it could lead to benefits like more informed health policy and improvements in the accuracy of diagnosing patients.
Innovative Media
AI has also made its mark on entertainment. The global market for AI in media and entertainment is estimated to reach $99.48 billion by 2030, growing from a value of $10.87 billion in 2021, according to Grand View Research. That expansion includes AI uses like recognizing plagiarism and developing high-definition graphics.
Challenges and Limitations of AI
While AI is certainly viewed as an important and quickly evolving asset, this emerging field comes with its share of downsides.
The Pew Research Center surveyed 10,260 Americans in 2021 on their attitudes toward AI. The results found 45 percent of respondents are equally excited and concerned, and 37 percent are more concerned than excited. Additionally, more than 40 percent of respondents said they considered driverless cars to be bad for society. Yet the idea of using AI to identify the spread of false information on social media was more well received, with close to 40 percent of those surveyed labeling it a good idea.
AI is a boon for improving productivity and efficiency while at the same time reducing the potential for human error. But there are also some disadvantages, like development costs and the possibility for automated machines to replace human jobs. It’s worth noting, however, that the artificial intelligence industry stands to create jobs, too — some of which have not even been invented yet.
Future of Artificial Intelligence
When one considers the computational costs and the technical data infrastructure running behind artificial intelligence, actually executing on AI is a complex and costly business. Fortunately, there have been massive advancements in computing technology, as indicated by Moore’s Law, which states that the number of transistors on a microchip doubles about every two years while the cost of computers is halved.
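Moore’s Law as stated above is easy to turn into a back-of-the-envelope projection; the starting transistor count below is purely illustrative.

```python
# Transistor counts doubling roughly every two years (Moore's Law).

def projected_transistors(start_count, years, doubling_period=2):
    """Project a transistor count forward under steady doubling."""
    return start_count * 2 ** (years / doubling_period)

# Starting from a hypothetical 1 billion transistors, ten years of doubling:
print(f"{projected_transistors(1e9, 10):.0f}")  # -> 32000000000 (32x in a decade)
```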

So let us take a look at what possibly could be the future of AI:
1. Reinforcement Learning
Reinforcement learning, in simple words, is a training approach that uses a system of rewards and punishments to shape an algorithm’s behavior. A simple example: suppose you want to teach your dog to sit. You tell the dog to sit, and at first the dog performs a random action.

If the action is not what we want, we give a negative reward so the dog will do that action less often. When we get the desired action, we give a positive reward, such as a biscuit. In this way the dog is reinforced to learn and, over time, can make the right decision on its own.
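The dog-training story maps directly onto a minimal reinforcement-learning loop: keep a value estimate for each action and nudge it by the reward received. The actions, rewards and learning rate below are all made up for illustration.

```python
# Minimal reward/punishment loop: the learner tracks a value per action and
# updates it toward the reward each action earns.

import random

ACTIONS = ["sit", "bark", "roll"]
values = {action: 0.0 for action in ACTIONS}
LEARNING_RATE = 0.5

def reward(action):
    """Biscuit for sitting, disapproval for everything else."""
    return 1.0 if action == "sit" else -1.0

random.seed(0)  # fixed seed so the run is reproducible
for _ in range(100):
    # Explore a random action sometimes; otherwise exploit the best known one.
    if random.random() < 0.3:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    values[action] += LEARNING_RATE * (reward(action) - values[action])

print(max(values, key=values.get))  # -> sit
```

After enough trials the value of “sit” climbs toward the positive reward while the punished actions sink, which is the whole mechanism in miniature.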
Humans and most animals learn from past experiences, and reinforcement learning lets a machine do the same: it uses feedback on its previous actions to evolve and learn. The robots of Boston Dynamics in the US have already learned how to do backflips and jump with the use of state-of-the-art reinforcement learning.
For that matter, Amazon’s Alexa is learning the flow of conversation by simulating interactions with its users.
2. Drastic Change in Employment Sector
Now that many companies use robotic arms (like SCARA) for routine operational aspects of manufacturing, such as assembly-line work, employees can focus on the more critical aspects of their jobs. Adidas, for example, has planned Speedfactory plants in Europe: almost entirely robot-run manufacturing facilities that aim to reduce errors in manufacturing and cut shipping time.
However, the increased usage of robots and AI in all fields might mean that companies start letting go of employees. One widely cited study estimates that robots could take over more than 20 million jobs by 2030, raising fears of mass unemployment.
One advantage of this could be that robots take up jobs that pose danger to human life, such as welding. The welding process emits toxic fumes and extremely loud noise, both harmful to the health of the human doing the job.
Companies like Faulhaber MICROMO in the US have even started working on robots that could defuse bombs.
3. Automated Transportation

The thought that someday we will be sitting in the back seat of our car while it drives itself to places (as per our instructions) is scary yet exciting. Automated transportation is exactly that. It is usually described in six levels, 0 through 5, which represent the extent of autonomy achieved.
- At level 0, the driver is responsible for performing every driving task, from applying the brakes to changing gears to controlling the steering.
- Level 1 is driver assistance, where assistance systems support the driver but do not take full control. Park assist is one such feature: the driver takes care only of the car’s speed, while the car controls the steering.
- Level 2 is when the car can drive on its own, but the driver has to be present in case the system fails. Tesla’s Autopilot and Nissan’s ProPilot both provide steering, acceleration and braking, but the driver has to be able to intervene in case of a failure, staying alert and keeping an eye on the road.
- At level 3, the driver can disengage from driving entirely, but must remain present to handle any unforeseen failures. Audi’s A8L, the first car to claim level 3 autonomy, can take up full driving responsibility in slow-moving traffic.
- Level 4 allows full self-driving mode, but only under certain conditions, such as within particular cities or mapped areas; outside those conditions a driver is still required. Google’s Waymo project is one such effort, with cars that have been operating driver-free in parts of the US for some time now.
- Level 5 is the ultimate level of autonomous transportation, requiring zero human interaction to maneuver. One example of such a car would be a robotic taxi. Elon Musk, the CEO of Tesla, claimed the company would be ready for this level by 2020.
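The six levels above can be condensed into a simple lookup table (wording paraphrased from the descriptions, purely for illustration):

```python
# The six autonomy levels as a data structure.

AUTONOMY_LEVELS = {
    0: "driver performs all driving tasks",
    1: "driver assistance: systems support but never take full control",
    2: "car steers, accelerates and brakes; driver must stay alert",
    3: "driver can disengage, but must handle unforeseen failures",
    4: "full self-driving, but only in limited conditions or areas",
    5: "full autonomy: zero human interaction required",
}

def describe(level):
    """Return the responsibility split at a given autonomy level."""
    return AUTONOMY_LEVELS[level]

print(describe(2))
```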
4. Machines are Going to be as Smart as Humans
In 2014, the researchers Vincent C. Müller and Nick Bostrom conducted a survey to find out when AI experts think human-level AI, or artificial general intelligence, will arrive. Surprisingly enough, the experts put a 50-50 chance on achieving human-level AI by around 2040.
In his book Our Final Invention, author James Barrat quotes Vernor Vinge’s prediction: “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” The book also argues that AI will improve itself in the coming years until it becomes an artificial superintelligence.
He compares AI to nuclear fission: simultaneously destructive and illuminating. (Nuclear fission is the scientific process of splitting the nucleus of an atom into two or more parts.)
He also believes that if humans fail to set the right boundaries around AI, we might have to live with whatever it comes up with.
Did you know?
The Great Filter, in the context of the Fermi Paradox, is the idea that the reason extraterrestrial beings, or aliens, have not yet reached the Earth is that some filter finishes off such civilizations before they can contact us.
One possibility is that before the aliens could build spaceships capable of traveling between planetary systems, they invented AI, which ultimately led to their downfall.
5. Generative Adversarial Networks
GAN stands for Generative Adversarial Network. A GAN pits two neural networks against each other: a generator produces candidate data, and a discriminator tries to tell the generated data from real data. By competing this way, the pair learns to capture, analyze and reproduce the trends and variations within a dataset.
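The adversarial dynamic can be caricatured in one dimension. This is emphatically not a real GAN (no neural networks, no gradients), just a sketch of the two-player back-and-forth: a “generator” emits a number, a “discriminator” draws a boundary between real and fake, and the generator shifts to fool it.

```python
# Toy adversarial loop: the generator's output drifts toward the real data
# because the discriminator keeps redrawing the boundary between them.

REAL_MEAN = 5.0   # the data the generator tries to imitate
mu = 0.0          # the generator's output, initially far from the data
STEP = 0.2

for _ in range(200):
    boundary = (REAL_MEAN + mu) / 2          # discriminator's decision boundary
    mu += STEP if mu < boundary else -STEP   # generator moves to fool it

print(round(mu, 1))  # ends up within a few tenths of 5.0
```

A real GAN replaces both scalar players with neural networks and both update rules with gradient steps, but the alternating competition is the same idea.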
One example of GAN can be bitmojis. A bitmoji is essentially a personalized cartoon avatar that is created to look just like the user. Here, the bitmoji app translates an image to a cartoon that has similar properties to the image.

One can either simply submit a face photo to the app and wait for it to be translated into an emoji, or choose the eyes, nose, lips and other features most similar to one’s own from a given set of options.
This technique could prove useful in criminal identification, where eyewitnesses or police officers create an avatar of a suspect by choosing from a set of options. This is easier than regular sketching, since drawing a face from a plain oral description can be stressful and not fully accurate. The technique also allows the cartoon avatar to be translated back into a photo that closely resembles the face.
6. Reach the Next Level in Science
Popular writer William Gibson once said, “The future is here. It’s just not evenly distributed yet.” In 2018, a robot named Eve, developed by scientists at the Universities of Manchester, Aberystwyth and Cambridge, discovered that a certain ingredient in toothpaste is helpful in curing drug-resistant malaria.
This suggests that AI is not only going to boost development in science but has a much bigger role to play for the greater good.
7. Caring for the Elderly
As AI grows exponentially, another benefit society can derive from it is a kind of attendant for the elderly. Scientists are working on robots that can provide medical care to senior citizens.
As robotics approaches, and in some tasks exceeds, human efficiency levels, there will come a day when robots remind our parents and grandparents to take their medicines, or assist them in carrying out tasks involving motor functions.
8. Great Support in Defence
The military industry has been using artificial intelligence for many varied purposes. The autonomy of AI systems makes them suited for hostile situations where sending humans may not be feasible.
The development of autonomous weapons systems is a focus of military research in many developed nations like the US, Russia, China, the UK, and France. Here are a few military applications of artificial intelligence:
Battlefield Surveillance
Getting up-to-date information about hostile zones is of major importance for a military response, and live updates are often not possible due to the high danger. AI-operated drones are already used by developed nations like the US and Russia for this purpose.
These drones can surveil hot zones and hostile areas. They provide instant alerts when they notice an anomaly. In certain cases, these drones are also used for pre-emptive strikes or first responses.
Intrusion Detection
Cybersecurity is one of the most promising applications of intelligent computing systems. AI can monitor the state of a network and all the data transactions taking place, around the clock.
This makes AI uniquely suited to detecting and preventing intrusions into systems and networks that hold sensitive and classified information. Highly important information and protocols, like nuclear access codes, can be protected with the help of properly regulated AI.
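A minimal sketch of the monitoring idea, assuming a simple statistical baseline rather than any particular security product; the traffic numbers are invented.

```python
# Flag network activity that deviates sharply from the learned baseline.

def detect_anomalies(traffic, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = sum(traffic) / len(traffic)
    variance = sum((t - mean) ** 2 for t in traffic) / len(traffic)
    std = variance ** 0.5
    return [t for t in traffic if abs(t - mean) > threshold * std]

# Requests per minute; the burst at 900 is the intrusion-like outlier.
readings = [100, 104, 98, 101, 99, 102, 900, 97, 103, 100]
print(detect_anomalies(readings))  # -> [900]
```

Real intrusion-detection systems learn far richer baselines (per-user, per-port, per-protocol), but the core move is the same: model “normal” and alert on what falls outside it.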
9. New Face of Movie Industry
Many scientific innovations take shape for the first time in science fiction. Artificial intelligence is such a topic that sci-fi movies love and hate. There are just as many movies with good and helpful AI as there are with the evil ones.
Artificial intelligence has been featured in movies for just about a hundred years. The first time we saw an AI as a movie character was in the German film Metropolis in 1927.
Since then, many AI characters have portrayed the worries people have about AI, like HAL 9000 from 2001: A Space Odyssey (1968), Skynet from the Terminator series, or Ultron from Avengers: Age of Ultron (2015).
On the other hand, some AI characters are loved for their mannerisms and personalities as well as their helpful nature and good deeds, like R2-D2 and C-3PO from the Star Wars series and WALL-E from the movie WALL-E (2008).
Here are a few movies that changed the way people thought of AI:
2001: A Space Odyssey – HAL 9000 was different from other movies’ AI in its deceptively human traits. It experienced pride and arrogance, and the reason it decided to kill the astronauts accompanying it on its journey was its absolute denial of failing.
The movie showed how an AI created to serve a purpose could not comprehend failing at any cost. HAL even showed fear when its end was near.
Ghost in the Shell – Ghost in the Shell is an animated movie from 1995. Its characters danced on the edge of human and machine: with cybernetically enhanced bodies and fully cyborg characters, the movie changed how people viewed AI and its possible synergy with humans.
I, Robot – I, Robot was released in 2004. The movie showed AI-operated robots everywhere, acting as personal and public servants. The AI in the movie was built on “Three Laws” meant to prevent them from harming humans. But the outcome still did not turn out that well.
Summary
The above points paint a hazy picture of what AI could really have in store for the future. But one can definitely say that AI is a violent delight: it is going to be everywhere and will be as helpful as electricity, yet its exploitation could be quite destructive. Anyway, we can’t predict the future. So let’s relax and wait for it while we extensively use our Alexas and assistants.
Here’s hoping you liked this blog on the future of AI. In case you have any queries or suggestions, do drop them in the comment section below!