Imagine learning how to ride a bicycle! You learn to balance, pedal, ride in a straight line, turn, ride in busy streets - all set!!! It takes step-by-step learning, and then if you are offered a different bicycle, you would apply the “truths” you discovered in your earlier learning and quickly pick up the new one too. Our machines so far perform the tasks they are programmed for and, as obedient followers, carry out the required job. However, the new wave of technology is striving to make machines more intelligent - to not only seek but offer assistance, to make our decision making better, to help an ageing population store and retrieve memories that fade, and much more!!!
Sounds interesting? Conniving? …???
Vishal Dhupar, Managing Director – Asia South at Nvidia, will be discussing “Re-Emergence Of Artificial Intelligence Based On Deep Learning Algorithm” as part of the invited keynote on Day 1 of DVCon India 2017. Passionate about the subject, Vishal shares the background and what lies ahead for us in the domain of AI & Deep Learning. Extremely useful for everyone from beginners to practitioners!!!
Vishal, your keynote focuses on AI & Deep Learning – an intricate and interesting topic. Tell us more about it?
Curiously,
the lack of a precise, universally accepted definition of AI probably has
helped the field to grow, blossom, and advance at an ever-accelerating pace.
Claims about the promise and peril of artificial intelligence are abundant, and
growing.
Several factors have fueled the AI revolution, and these will be the premise of my talk. I will touch upon how machine learning is maturing and being propelled dramatically forward by deep learning, a form of adaptive artificial neural networks. This leap in the performance of information-processing algorithms has been accompanied by significant progress in hardware and software systems technology. Characterizing AI depends on the credit one is willing to give synthesized software and hardware for functioning appropriately and with foresight. I will be touching upon a few examples of AI advancements.
How do we
differentiate between machine learning, artificial intelligence & deep
learning?
Machine
learning, deep learning, and artificial intelligence all have relatively
specific meanings, but are often broadly used to refer to any sort of modern,
big-data related processing approach. You can think of deep learning,
machine learning and artificial intelligence as a set of concentric circles
nested within each other, beginning with the smallest and working out. Deep learning
is a subset of machine learning, which is a subset of AI. When applied to a problem, each of these would take a slightly different approach and hence produce a somewhat different outcome.
Artificial
Intelligence is the broad umbrella term for attempting to make computers think
the way humans think, be able to simulate the kinds of things that humans do
and ultimately to solve problems in a better and faster way than we
do. Then, machine learning refers to any type of computer program that can
“learn” by itself without having to be explicitly programmed by a
human. Deep learning is one of many approaches to machine learning. Other
approaches include decision tree learning, inductive logic programming,
clustering, reinforcement learning, and Bayesian networks. Deep learning was
inspired by the structure and function of the brain, namely the interconnection of many neurons.
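To make the “interconnection of many neurons” idea concrete, here is a minimal, illustrative sketch (not from the keynote) of a tiny two-layer network doing a single forward pass; the layer sizes, random weights, and input values are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 3 inputs -> 4 hidden neurons -> 1 output neuron.
# The weight matrices are the "interconnections" between layers of neurons.
W1 = rng.normal(size=(3, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)   # ReLU activation of the hidden layer
    return hidden @ W2 + b2               # linear output neuron

x = np.array([0.5, -1.0, 2.0])            # an arbitrary example input
print(forward(x))
```

Real deep networks stack many such layers and learn the weights from data rather than drawing them at random, but the basic picture of layered, interconnected units is the same.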
Some of the discussions on deep learning are intriguing. Will it lead to machines taking over jobs?
Machines are getting smarter because we’re getting better at
building them. And we’re getting better at it, in part, because we are smarter
about the ways in which our own brains function.
Despite the massive potential of AI systems, they are still
far from solving many kinds of tasks that people are good at, like tasks
involving hand-eye coordination or manual dexterity; most skilled trades,
crafts and artisanship remain well beyond the capabilities of AI systems. The
same is true for tasks that are not well-defined, and that require creativity,
innovation, inventiveness, compassion or empathy. However, repetitive tasks
involving mental labor stand to be automated, much as repetitive tasks involving
manual labor have been for generations.
Let me give you an example: your calculator is already smarter than you are at arithmetic; your GPS is smarter than you are at spatial navigation; Google and Bing are smarter than you are at long-term memory. And we're going to take these different types of thinking and put them into, say, a car. The reason we want to put them into a car, so that the car drives itself, is precisely that it is not driving like a human. It's not thinking like us. That's the whole feature of it. It's not being distracted, it's not worrying about whether it left the stove on, or whether it should have majored in finance. It's just driving.
Which domains do you see adopting these techniques fastest and benefiting the most?
In healthcare, deep learning is expected to extend its roots
into medical imaging, translational bioinformatics, public health policy
development using inputs from EHRs, and beyond. Rapid improvements in computational power, fast data storage, and parallelization have contributed to the rapid uptake of deep learning, in addition to its predictive power and its ability to automatically generate optimized high-level features and semantic interpretations from the input data.
It seems the ASIC design flow/process can benefit equally from these techniques. Your views on this?
Deep learning, in its elements, is an optimization problem. Its application in any workflow or design process where there is scope for optimization carries enormous benefits. With respect to the design, fab, and bring-up of ICs, deep learning helps with inspection of defects, determination of voltage and current parameters, and so on. In fact, at NVIDIA we carry out rigorous scientific research in these areas. I believe that as we unlock more methods of unsupervised learning, we'll discover and explore many more possibilities for efficient design, where we don't entirely depend on large volumes of labelled data, which are hard to get in such a complex practice.
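To illustrate the “optimization problem” framing, here is a minimal, self-contained sketch (an illustration under assumed toy data, not NVIDIA's methodology): fitting a two-parameter model by gradient descent, using the same loss-gradient-update loop that underlies deep-learning training, just on the smallest possible “model”.

```python
import numpy as np

# Toy illustration: fit y = w*x + b to made-up points by gradient descent,
# minimizing mean squared error.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])       # generated from y = 2x + 1

w, b = 0.0, 0.0
lr = 0.05                                 # learning rate (assumed, not tuned)
for step in range(2000):
    pred = w * x + b
    err = pred - y
    # Gradients of the mean squared error with respect to w and b
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)                               # approaches (2.0, 1.0)
```

A deep network replaces the two scalars with millions of weights and the straight line with stacked nonlinear layers, but the optimization loop is conceptually the same.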
What error rates in execution can we expect with deep learning? Can we rely on machines for life-critical applications?
Deep learning will certainly outperform us in a few specific tasks with very low error rates. For example, image classification is a task where models can be far more accurate than mortals. Consider the case of language translation: today machines are capable of efficient and economical multilingual translation at a scale that simply wouldn't be possible for a person. [Recently Microsoft's speech recognition systems achieved a word error rate of 5.1%, on par with humans.] When we look at healthcare, where life-critical decisions are made, deep learning can be used to improve accuracy, speed, and scale in solving problems like screening, tumor segmentation, etc., and not necessarily declaring a person alive or otherwise!
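For readers curious how a figure like that 5.1% is computed, here is a minimal sketch of the standard word-error-rate calculation: the word-level edit distance (substitutions, insertions, deletions) divided by the length of the reference transcript. The example sentences are made up.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic edit-distance table between the two word sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in a five-word reference -> WER of 0.2
print(word_error_rate("switch on the stove please", "switch on the stone please"))
```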
In all the instances we just saw, state-of-the-art capabilities are developed in very specific and highly verticalized applications. Machines are smarter than us in these applications but nowhere close to our general intelligence in piecing these inputs together to draw logical conclusions. From a pure systems and software standpoint, we will need guard rails, i.e. fail-safe heuristics that back up a model when it operates outside its boundaries, to keep faults at bay.
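One concrete reading of such a guard rail (an illustrative assumption, not a prescription from the talk) is to use the model's answer only when it reports enough confidence, and defer to a conservative fallback otherwise; the threshold, model, and fallback below are all hypothetical stand-ins.

```python
CONFIDENCE_THRESHOLD = 0.9   # hypothetical cut-off; would be tuned per application

def guarded_predict(model_predict, fallback, features):
    """Use the model only when it is confident; otherwise defer to a safe fallback."""
    label, confidence = model_predict(features)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    # Outside the model's comfort zone: hand over to a conservative rule
    # or escalate to a human reviewer.
    return fallback(features)

# Toy usage with stand-in functions (assumptions, not a real clinical system):
model = lambda f: ("tumor", 0.62)              # returns (label, confidence)
safe_default = lambda f: "refer to radiologist"
print(guarded_predict(model, safe_default, {"scan_id": 42}))
```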
This is the 4th
edition of DVCon in India. What are your expectations from the conference?
While the 20th century was marked by the rise and dominance of the United States, the next 100 years are being dubbed the Asian Century by many prognosticators. No country is driving this tectonic shift more than India, with its tech talent.
with its tech talent. NVIDIA is a world leader in artificial intelligence
technologies and is doing significant work to train the next generation of deep
learning practitioners. Earlier this year we announced our plans to train
100,000 developers in FY18 in deep-learning skills. We are working across
academia and the startup community to conduct training in deep learning. I'm keen to understand the enthusiasm of the attendees in these areas and how NVIDIA can provide a bigger platform and bring the community of AI researchers and scientists together.
Thank you, Vishal!
Join us on Day 1 (Sep 14) of DVCon India 2017 at Leela Palace, Bangalore to attend this keynote and other exciting topics!
Disclaimer: "The postings on this blog are my own and do not necessarily reflect the views of Aricent"