Artificial Intelligence (AI) is a branch of computer science dealing with the simulation of intelligent behaviour in computers. The processes involved include learning (the acquisition of information and of rules for using that information), reasoning (using the rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision (also referred to as computer vision).
The term "artificial intelligence" was coined by John McCarthy, an American computer scientist, in 1956 at the Dartmouth Conference, where the discipline was born. Today, AI is an umbrella term that encompasses everything from robotic process automation to actual robotics. It has gained prominence recently due, in part, to big data: the increase in the speed, size and variety of the data businesses are now collecting. AI can perform tasks such as identifying patterns in data more efficiently than humans can, enabling businesses to gain more insight from their data.
The core difference from standard methods of data analysis is that AI programs do not analyse data along a fixed, pre-programmed path. Instead, they learn from the data in order to respond intelligently to new data and adapt their outputs accordingly. It is this ability that allows AI to cope with the analysis of big data in its varying shapes, sizes and forms. The concept of AI has existed for some time, but rapidly increasing computational power (a trend described by Moore's Law) has brought us to the point at which the application of AI is becoming a practical reality. One of the fastest-growing approaches by which AI is achieved is machine learning.
Examples of AI technology in the research domain:
Machine learning is the science of getting a computer to act without being explicitly programmed, and it is the fastest-growing approach to achieving AI.
Deep learning is a subset of machine learning that, in very simple terms, can be thought of as the automation of predictive analytics. There are three types of machine learning algorithms: supervised learning, in which data sets are labelled so that patterns can be detected and used to label new data sets; unsupervised learning, in which data sets aren't labelled and are sorted according to similarities or differences; and reinforcement learning, in which data sets aren't labelled but, after performing an action or several actions, the AI system is given feedback.
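Of the three types above, supervised learning is the easiest to illustrate: labelled examples are used to label new data. Below is a minimal sketch, a 1-nearest-neighbour classifier written from scratch; the points and labels are invented for illustration, and real systems would use a dedicated library rather than hand-rolled code.

```python
# Supervised learning sketch: label a new point with the label of the
# closest point in the labelled training data (1-nearest-neighbour).

def predict(train_points, train_labels, new_point):
    """Return the label of the training point nearest to new_point."""
    def dist(a, b):
        # Squared Euclidean distance; the square root is unnecessary for ranking.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train_points)), key=lambda i: dist(train_points[i], new_point))
    return train_labels[best]

# Labelled training data: two clearly separated clusters (illustrative only).
points = [(1, 1), (1, 2), (8, 8), (9, 8)]
labels = ["small", "small", "large", "large"]

print(predict(points, labels, (2, 1)))  # near the first cluster -> "small"
print(predict(points, labels, (8, 9)))  # near the second cluster -> "large"
```

Unsupervised learning would drop the `labels` list and group the points by similarity alone; reinforcement learning would instead adjust behaviour from feedback received after acting.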
Machine vision is the science of making computers see. Machine vision captures and analyses visual information using a camera, analogue-to-digital conversion and digital signal processing. It is often compared to human eyesight, but machine vision isn't bound by biology and can be programmed to see through walls, for example. It is used in a range of applications from signature identification to medical image analysis. Computer vision, which is focused on machine-based image processing, is often conflated with machine vision.
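Once an image has been digitised, "analysing visual information" comes down to arithmetic on a grid of pixel values. The sketch below is a toy example of that digital signal processing step: a hardcoded greyscale grid stands in for a camera frame, and a simple horizontal gradient locates a vertical edge. The image values and the threshold are invented for illustration.

```python
# Toy digital image processing: find a vertical edge in a tiny greyscale
# image (pixel values 0-255). A real pipeline would start from a camera
# frame after analogue-to-digital conversion; this grid stands in for one.

image = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]

def horizontal_gradient(img):
    """Absolute difference between horizontally adjacent pixels.

    Large values mark abrupt brightness changes, i.e. vertical edges."""
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
            for row in img]

grad = horizontal_gradient(image)
# Columns where the brightness jump exceeds an arbitrary threshold of 50.
edge_columns = {x for row in grad for x in range(len(row)) if row[x] > 50}
print(sorted(edge_columns))  # the edge sits between columns 1 and 2
```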
Natural language processing (NLP) is the processing of human language by a computer program. One of the older and best known examples of NLP is spam detection, which looks at the subject line and the text of an email and decides if it's junk. Current approaches to NLP are mainly based on machine learning. NLP tasks include text translation, sentiment analysis, semantic network analytics and speech recognition. Pattern recognition is a branch of machine learning that focuses on identifying patterns in data.
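The spam-detection example above can be sketched with a toy word-count classifier in the naive Bayes style, one of the classic machine-learning approaches to this task. The training messages below are invented for illustration, and a production filter would use far more data and features.

```python
# Toy NLP spam detection: score a message against word counts gathered
# from labelled spam and non-spam ("ham") messages, naive-Bayes style.
from collections import Counter
import math

# Tiny invented training sets, purely for illustration.
spam = ["win a free prize now", "free money win now"]
ham = ["meeting agenda for monday", "project update attached"]

def train(messages):
    counts = Counter(w for m in messages for w in m.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def score(message, counts, total):
    # Sum of log-probabilities with add-one smoothing, so unseen words
    # do not zero out the whole score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in message.split())

def is_spam(message):
    return score(message, spam_counts, spam_total) > score(message, ham_counts, ham_total)

print(is_spam("free prize"))      # words seen mostly in spam -> True
print(is_spam("monday meeting"))  # words seen only in ham -> False
```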