Tag Archive: autonomous vehicles


Development of artificial intelligence - a brief history

The Three Laws of Robotics — Handbook of Robotics, 56th Edition, 2058 A.D.
1. First Law — A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. Second Law — A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. Third Law — A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Ever since Isaac Asimov penned these fictional rules governing the behavior of intelligent robots in 1942, humanity has been fixated on the idea of making intelligent machines. After British mathematician Alan Turing devised the Turing Test in 1950 as a benchmark for deciding whether a machine could be considered sufficiently smart, the term artificial intelligence was coined in 1956 at a summer conference at Dartmouth College in the USA. Prominent scientists and researchers debated the best approaches to creating AI, favoring one that begins by teaching a computer the rules governing human behavior — using reason and logic to process available information.

There was plenty of hype and excitement about AI, and several countries started funding research as well. Two decades in, however, the progress made had not delivered on the initial enthusiasm or produced a major real-world application. Millions had been spent with little to show for it, and the promise of AI never became anything more substantial than programs learning to play chess and checkers. Funding for AI research was cut heavily, and the field entered what came to be called an AI Winter, which stalled further breakthroughs for several years.


Programmers then focused on smaller, specialized tasks for AI to learn to solve. The reduced scale of ambition brought success back to the field. Researchers stopped trying to build an artificial general intelligence that would replicate human learning and focused on solving particular problems. In 1997, for example, IBM’s supercomputer Deep Blue played and won against the then-reigning world chess champion Garry Kasparov. The achievement was still met with caution, as it showcased success only in a highly specialized problem with clear rules, using what was more or less a smart search algorithm.

The turn of the century changed the AI status quo for the better. A fundamental shift in approach moved away from pre-programming a computer with rules of intelligent behavior and toward training a computer to recognize patterns and relationships in data — machine learning. Taking inspiration from research in human cognition and the functioning of the brain, neural network algorithms were developed that use many ‘nodes’ to process information much as neurons do. These networks stack multiple layers of nodes, with deeper layers capturing progressively more complex features — hence the term deep learning.
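
To make the idea of stacked layers concrete, here is a minimal sketch in Python (using only NumPy, purely for illustration and not tied to any system mentioned in this article) of how a few layers of nodes transform an input into an output:

```python
# A minimal sketch of stacked layers of "nodes": each layer multiplies its input
# by a weight matrix and applies a non-linearity; deeper stacks can represent
# progressively more complex relationships. Weights here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    """One layer of nodes: a weighted sum of inputs followed by a ReLU non-linearity."""
    return np.maximum(0.0, x @ weights + biases)

# Three stacked layers turning a 4-number input into a single output score.
x = rng.normal(size=(1, 4))                    # one input example with 4 features
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

hidden1 = layer(x, w1, b1)
hidden2 = layer(hidden1, w2, b2)
output = hidden2 @ w3 + b3                     # final layer left linear
print(output)
```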


Different types of machine learning approaches were developed at this time:

Supervised Learning uses training data that is correctly labeled to teach the relationship between given input variables and the desired output (a minimal code sketch follows this list).

Unsupervised Learning works without labeled training data and can be used to detect recurring patterns and structure in the data on its own.

Reinforcement Learning encourages trial-and-error learning by rewarding desired results and penalizing undesired ones.
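
As a concrete illustration of the supervised approach referenced above, here is a minimal sketch using scikit-learn; the dataset and model choice are assumptions made purely for illustration, not a reference to any system named in this article:

```python
# A minimal supervised-learning sketch: labeled examples are used to fit a model
# that maps inputs to the desired output, then checked on held-out data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                      # features and their correct labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                            # learn the input-output relationship
print("held-out accuracy:", model.score(X_test, y_test))
```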

Along with better-written algorithms, several other factors helped accelerate progress:

Exponential improvements in computing capability with the development of Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) have reduced training times and enabled the implementation of more complex algorithms.


The availability of massive amounts of data today has also contributed to sharpening machine learning algorithms. The first significant phase of data creation came with the spread of the internet and the large-scale creation of documents and transactions. The next big leap came with the universal adoption of smartphones, generating huge volumes of unstructured data — images, music, videos, and documents. We are in another phase of data explosion today, with cloud networks and smart devices constantly collecting and storing digital information. With so much data available to train neural networks on scores of potential use-cases, significant milestones are being surpassed, and we are now witnessing the result of decades of optimistic strides.

  • Google has built autonomous cars.
  • Microsoft used machine learning to capture human movement in the development of Kinect for Xbox 360.
  • IBM’s Watson defeated previous winners on the television show Jeopardy!, where contestants must phrase general-knowledge responses as questions based on given clues.
  • Apple’s Siri, Amazon’s Alexa, Google Voice Assistant, Microsoft’s Cortana, etc. are well-equipped conversational AI assistants that process language and perform tasks based on voice commands.
  • AI is becoming capable of learning from scratch the best strategies and gameplay to defeat human players in multiple games — the Chinese board game Go by Google DeepMind’s AlphaGo and the computer game Dota 2 by OpenAI are two prominent instances.
  • Alibaba’s language-processing AI outscored top human contestants in a reading-comprehension test conducted by Stanford University.
  • And most recently, Google Duplex has learned to use human-sounding speech almost flawlessly to make appointments over the phone for the user.
  • We have even created a chatbot (called Eugene Goostman) that reportedly passed the Turing Test, 64 years after the test was first proposed.

All the above examples are path-breaking in their respective fields, but they also show the kind of specialized results that we have managed to attain. In addition, such achievements have been realized only by organizations with access to the best resources — finance, talent, hardware, and data. Building a humanoid bot that can be taught any task using a general artificial intelligence algorithm is still some distance away, but we are taking the right steps in that direction.


Bridged is helping companies realize their dream of developing AI bots and apps by taking care of their training data requirements. We create curated data sets to train machine learning algorithms for various purposes — self-driving cars, facial recognition, agri-tech, chatbots, customer service bots, virtual assistants, NLP, and more.


Computer vision and image annotation

Understanding the Machine Learning technology that is propelling the future

Any computing system fundamentally works on the basic concepts of input and output. Whether it is a rudimentary calculator, our all-requirements-met smartphone, a NASA supercomputer predicting the effects of events occurring thousands of light-years away, or a robot like J.A.R.V.I.S. helping us defend the planet, it is always a response to a stimulus — much like how we humans operate — and the algorithms we create define how that response is produced. The specifications of the processing tools determine how accurate, quick, and advanced the output can be.

Computer Vision is the process of computer systems and robots responding to visual inputs — most commonly images and videos. Put simply, computer vision lets a machine read its inputs (and report its outputs) at the same visual level as a person, removing the need to translate them into machine language and back again. Naturally, computer vision techniques have the potential for a higher level of understanding and application in the human world.

While computer vision techniques have been around since the 1960s, it wasn’t until recently that they became truly powerful tools. Advancements in Machine Learning, along with increasingly capable storage and computational hardware, have driven the rise of Computer Vision methods.

What follows is also an explanation of how Artificial Intelligence is born.

Understanding Images

Machines interpret images as a collection of individual pixels, with each colored pixel being a combination of three numbers — its red, green, and blue intensities. The total number of pixels is the image resolution, and higher resolutions mean larger storage sizes. Any algorithm that tries to process images needs to be capable of crunching very large arrays of numbers, which is why progress in this field has tracked advances in computational ability.
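
A short sketch, assuming Python with Pillow and NumPy as the tooling and a placeholder file name, shows what this pixel-level view looks like in practice:

```python
# A minimal sketch of how a machine "sees" an image: a grid of pixels, each pixel
# three numbers (red, green, blue intensities). The file path is a placeholder.
import numpy as np
from PIL import Image

img = Image.open("example.jpg").convert("RGB")   # placeholder path to any photo
pixels = np.asarray(img)                         # array of shape (height, width, 3)

print(pixels.shape)     # e.g. (1080, 1920, 3): the resolution times 3 color channels
print(pixels[0, 0])     # the top-left pixel as [R, G, B] values from 0 to 255
print(pixels.size)      # the total count of numbers an algorithm must crunch
```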


The building blocks of Computer Vision are the following two:

Object Detection

Object Identification

As is evident from the names, they stand for figuring out distinct objects in images (Detection) and recognizing objects with specific names (Identification).

These techniques are implemented through several methods, with algorithms of increasing complexity providing increasingly advanced results.

Training Data

The previous section explains how a computer represents images. Before a computer can perform the required output function, it is trained to predict such results based on data that is known to be relevant and accurate — this is called Training Data. An algorithm is a set of guidelines that defines the process by which a computer achieves the output — the closer the output is to the expected result, the better the algorithm. This training process is what is called Machine Learning.

This article is not going to delve into the details of Machine Learning (or Deep Learning, Neural Networks, etc.) algorithms and tools — basically, they are the programming techniques that work through the Training Data. Rather, we will proceed now to elaborate on the tools that are used to prepare the Training Data required for such an algorithm to feed on — this is where Bridged’s expertise comes into the picture.

Image Annotation

For a computer to understand images, the training data needs to be labeled and presented in a language that the computer would eventually learn and implement by itself — thus becoming artificially intelligent.

The labeling methods used to generate usable training data are called Annotation techniques, or for Computer Vision, Image Annotation. Each of these methods uses a different type of labeling, usable for various end-goals.

At Bridged AI, as reliable providers of artificial intelligence and machine learning training data, we offer a range of image annotation services, a few of which are listed below (a sketch of what such annotation records can look like follows the list):

2D/3D Bounding Boxes


Drawing rectangles or cuboids around objects in an image and labeling them with their respective classes.

Point Annotation


Marking points of interest in an object to define its identifiable features.

Line Annotation


Drawing lines over objects and assigning a class to them.

Polygonal Annotation


Drawing polygonal boundaries around objects and class-labeling them accordingly.

Semantic Segmentation


Labeling images at a pixel level for a greater understanding and classification of objects.

Video Annotation


Object tracking through multiple frames to estimate both spatial and temporal quantities.
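
For illustration only, here is a hypothetical sketch of what exported annotation records for two of the formats above (a bounding box and a polygon) might look like; the field names and structure are assumptions, not a description of Bridged’s actual export format:

```python
# A hypothetical annotation record: one image with two labeled objects.
# All field names, coordinates, and file names are illustrative placeholders.
import json

annotation = {
    "image": "frame_000123.jpg",   # placeholder file name
    "width": 1920,
    "height": 1080,
    "objects": [
        {   # 2D bounding box: top-left corner plus width and height, with a class label
            "type": "bounding_box",
            "label": "car",
            "bbox": [412, 560, 220, 130],
        },
        {   # polygon: a list of (x, y) vertices tracing the object's outline
            "type": "polygon",
            "label": "pedestrian",
            "points": [[980, 400], [1010, 395], [1025, 520], [985, 525]],
        },
    ],
}

print(json.dumps(annotation, indent=2))
```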

Applications of Computer Vision

It would not be an exaggeration to say computer vision is driving modern technology like no other. It finds application in a great many fields — from assisting cameras, recognizing landscapes, and enhancing picture quality to use-cases as diverse as self-driving cars, autonomous robotics, virtual reality, surveillance, finance, and healthcare — and the list grows by the day.

Facial Recognition


Computer Vision helps you detect faces, identify faces by name, understand emotion, recognize complexion, and more.

The use of this powerful tool is not limited to enhancing photos. You can implement it to quickly sift through customer databases, or even for surveillance and security by identifying fraudsters.

Self-driving Cars


Computer Vision is the fundamental technology behind developing autonomous vehicles. Most leading car manufacturers in the world are reaping the benefits of investing in artificial intelligence for developing on-road versions of hands-free technology.

Augmented & Virtual Reality


Again, Computer Vision is central to creating limitless fantasy worlds within physical boundaries and augmenting our senses.

Optical Character Recognition

An AI system can be trained through Computer Vision to identify and read text from images and scanned documents, and to use it for faster processing, filtering, and onboarding.
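
As one possible illustration, the sketch below uses the open-source Tesseract engine via the pytesseract package to pull text out of a scanned document; the library choice and file name are assumptions, and Tesseract itself must be installed separately:

```python
# A minimal OCR sketch: read the text contained in an image of a document.
# "scanned_invoice.png" is a placeholder path; requires the Tesseract engine.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("scanned_invoice.png"))
print(text)   # extracted text, ready for downstream filtering or processing
```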

Artificial Intelligence is the leading technology of the 21st century. While doomsday conspirators cry themselves hoarse about the potential destruction of the human race at the hands of AI robots, Bridged.co firmly believes that the various applications of AI that we see around us today are just like any other technological advancement, only better. Artificial Intelligence has only helped us in improving the quality of life while achieving unprecedented levels of automation and leaving us amazed at our own achievements at the same time. The Computer Vision mission has only just begun.