A digital twin is a digital representation of a physical object or system. Cognitive digital twin (CDT) technology uses AI to create highly sophisticated “twins,” models that accurately mimic a real-world object. That object could be a car, an airplane, a ship, or a helicopter. Increasingly in the healthcare field, CDT has been used to “twin” organs and biological systems for research, diagnostics, and health maintenance. Digital twins have even been built to represent and understand entire regions and cities.

Now, in perhaps one of the most complex and ambitious uses of CDT to date, the National Oceanic and Atmospheric Administration (NOAA) has announced plans to create a digital twin of the planet Earth to track global warming and other environmental issues!

The agency has partnered with NVIDIA and Lockheed Martin to construct the Earth Observation Digital Twin, an inaugural prototype of Earth modeled on real-time geophysical data sourced from satellites and ground stations.

According to NOAA, the replica Earth, or EODT, will be designed as a two-dimensional computer program. Potential climate impacts the EODT can display include global glacier melt, drought effects, wildfire risk, and other climate change events.

“We’re providing a one-stop shop for researchers, and for next-generation systems, not only for current, but for recent past environmental data,” Lockheed Martin Space Senior Research Scientist Lynn Montgomery said. “Our collaboration with NVIDIA will provide NOAA a timely, global visualization of their massive datasets.”

Emerging technologies like artificial intelligence play a key role in EODT’s data processing and modeling. Matt Ross, a senior manager at Lockheed Martin, said that the sheer volume and diversity of the NOAA data feeding EODT would make it challenging to glean accurate insights from the application without the use of AI.

“This data happens to come in different formats, because the data are so diverse, because it’s measuring so much different stuff,” Ross said. “It arrives in different formats that, absent technology, it could make it very, very difficult to gain the insights that NOAA needs to make decisions.”

Leveraging the power of AI and machine learning algorithms will help NOAA researchers assimilate and identify the incoming data, as well as detect any anomalies. Ross added that the combined power of AI and ML data processing is key to Lockheed and NVIDIA’s “digital twin” programming technology in that it can accurately model past data as well as future realities, all in an interactive, real-time interface.

While both NVIDIA and Lockheed intend for the final deliverable to be a two-dimensional user experience, additional capabilities may be added in the future.

“The fact that we can pull all this data into a single sort of format, in a single viewpoint, allows you to have real-time or near real time, access to it, and the interdependencies of that data to make real-time decisions,” said Dion Harris, the lead product manager of accelerated computing at NVIDIA.
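Pulling diverse feeds “into a single sort of format” comes down to normalizing each source into one common schema before anything is visualized. The actual EODT data model is not public, so the sources, field names, and units below are invented purely to sketch the idea:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Common schema every source is normalized into (hypothetical fields).
@dataclass
class Observation:
    source: str
    variable: str        # e.g. "sea_surface_temp"
    value: float
    unit: str
    timestamp: datetime

def from_satellite_json(rec: dict) -> Observation:
    """Hypothetical satellite feed: values in Kelvin, epoch seconds."""
    return Observation(
        source="satellite",
        variable=rec["var"],
        value=rec["kelvin"] - 273.15,   # normalize to Celsius
        unit="degC",
        timestamp=datetime.fromtimestamp(rec["epoch"], tz=timezone.utc),
    )

def from_ground_station_csv(row: list) -> Observation:
    """Hypothetical ground-station CSV row: station_id, var, celsius, iso_time."""
    return Observation(
        source=f"ground:{row[0]}",
        variable=row[1],
        value=float(row[2]),
        unit="degC",
        timestamp=datetime.fromisoformat(row[3]),
    )

sat = from_satellite_json({"var": "sea_surface_temp", "kelvin": 290.65, "epoch": 0})
gnd = from_ground_station_csv(["KWX1", "sea_surface_temp", "17.5", "1970-01-01T00:00:00+00:00"])
print(sat.value, gnd.value)  # both now comparable in the same unit
```

Once every record shares one schema, cross-source comparisons and real-time decisions become straightforward, which is the property the quote is describing.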

Other Industries Benefiting from Digital Twin Technologies

In addition to paving the way for a digital twin of the planet itself to monitor climate change, AI and digital twinning are revolutionizing many other industries, chief among them transportation. Just as CDT can monitor the health of the Earth, cognitive digital twin technologies are proving invaluable for predictive maintenance of high-value military vehicles, airplanes, ships, and even passenger cars. Digital twin solutions like those developed by CarTwin extend the lifespan of cars and other vehicles by monitoring the vehicle’s “health” through its “digital twin.”

Basically, CarTwin can provide diagnostic and predictive models for all vehicle systems for which data is available (either directly or indirectly) onboard the vehicle.

Virtually any part of the vehicle that has sensors, or that sensors can be developed for, can be “twinned.” These data sets are then enhanced and augmented with design and manufacturing data made available by the OEM.

Primarily designed for use in vehicle fleets, CarTwin works in combination with powerful AI models to predict breakdowns, monitor and improve performance, and measure and record real-time greenhouse gas emissions, reducing expensive maintenance costs and avoiding the lost revenue associated with fleet downtime.

Rohit Mahajan is a Managing Partner at BigRio and the President and Co-Founder of Citadel Discovery. He has a particular expertise in the development and design of innovative AI and machine learning solutions for clients in Healthcare, Financial Services, Retail, Automotive, Manufacturing, and other industry segments.

CarTwin has leveraged AI and Digital Twin technologies to create a digital, cloud-based clone of a physical vehicle designed to detect, prevent, predict, and optimize through AI and real-time analytics. If you would like to benefit from our expertise in these areas or if you have further questions on the content of this article, please do not hesitate to contact us.

One of the country’s leading comprehensive cancer centers has just announced that it is tapping an artificial intelligence-powered drug discovery platform to aid its development of novel cancer therapeutics.

The center is working with AI developer Exscientia to aid its discovery of new cancer drugs. According to an MD Anderson press release, the collaboration will start with “jointly identified oncology targets and then employ Exscientia’s AI platform to design small-molecule drugs.” The resulting candidates will be examined by MD Anderson’s Therapeutics Discovery division and its Institute for Applied Cancer Science, and the most promising prospects will potentially advance into clinical proof-of-concept studies at the Houston cancer center.

MD Anderson’s drug discovery institute, known as IACS, and the cancer center’s other teams have to date helped graduate at least five small-molecule and antibody-based therapies into early-stage clinical testing, including through collaborations with Bristol Myers Squibb, Ionis, Astellas and more.

The financial terms of the joint venture were not disclosed; however, in their announcement, Exscientia and MD Anderson said they will “jointly contribute to and support each program” that is targeted for development.

Exscientia has been a leader in the AI-driven design of small-molecule drugs and antibody therapies. In addition to partnering with facilities such as MD Anderson and well-known pharmaceutical companies, earlier this year Exscientia secured the rights to develop a drug of its own. After wrapping up an AI collaboration with Bayer to develop targets in cancer and cardiovascular disease, the two companies announced that Exscientia would retain the option to develop one of the two targets.

Citadel and AI for Drug Discovery

Similar to the partnership between Exscientia and MD Anderson, Citadel Discovery is sharing knowledge and expertise to better enable drug discovery, providing access to data, models, and results at a discount for academics, and developing a sharing platform and an expanded list of drug targets.

Citadel was launched in 2021 with the purpose of providing a kind of “open access” to the data and technology that will drive the future of pharma research, streamlining and lowering the costs of drug discovery and biological research.

The costs of drug discovery continue to rise, with current estimates exceeding $2 billion. Bringing a drug successfully through all clinical trial phases takes, on average, 10 to 12 years of research and development. Artificial intelligence and machine learning in drug discovery hold the key to reducing these costs and timelines.


As AI continues to advance in this digital age, in what seems an ever-increasing conflict between man and machine, the one bastion where “man” held out hope of dominance was the arts. However, even that may be wishful thinking, as recently there has been an increasing amount of “art” in artificial intelligence!

Some interesting, and perhaps somewhat disturbing, cases in point have been the introduction of “text-to-image” AI tools such as Midjourney and Stable Diffusion.

These programs have become wildly popular thanks to their remarkable ability to take language prompts from human users and translate them into original images. Another such platform, DALL-E, is now producing 2 million images a day, including some uncannily realistic creations as well as surreal abstracts inspired by users’ stated feelings.

What does this mean for the future of AI, for AI startup opportunities, and, more importantly, for human artists and the one field they had hoped would hold the line between man and machine?

When you use a text-to-image AI tool like Midjourney, you type in a phrase that describes what you want — for example, “A father feeling grief in the style of Van Gogh.”

In less than a minute, Midjourney produces four original images it thinks may match the prompt. You can then pick the image that you like best, create new variations based on that image, and refine them from there.

The private sector is already starting to realize the program’s potential, said David Holz, the founder of Midjourney, speaking to Marketplace.

“Business owners, game designers, people in the movie industry are using it,” he said.

About 30% of Midjourney’s users are professionals who use it primarily to brainstorm for commercial projects, Holz said, adding that tech like Midjourney will change how artists work.

But smart employers won’t use it to replace them. At least, that is the hope among the creative community.

“Some people will see this as an opportunity to cut costs and have the same quality,” Holz said. “They will fail.”

Artists who use these kinds of programs do not feel they are “cheating” any more than an architect or engineer who uses CAD/CAM software does. They are still channeling their creativity; they are just using an advanced tool to do so.

While purists remain, raising alarms on social media about AI replacing human creativity, the artists who have embraced the technology say they are still creating “art.” It is simply art that may belong in a very different category, just as computer graphic art differs from oil painting; both are undeniably art, created by artists.

What do you think?


The automotive sector has been one of the industries hit hardest by the pandemic and its aftermath. Rampant inflation, a chip shortage, and other global economic factors have kept new-car demand low and prices high, which means automotive consumers are choosing to keep their older cars longer.

AI is helping car owners do exactly that by providing solutions that can help drivers and repair shops diagnose current problems on their cars and anticipate future repair and service needs – which can all extend the lifespan of their vehicles.

However, whereas most of the emerging AI-driven auto diagnostic solutions are “apps” designed for individual consumers or repair shops, CarTwin is a machine learning/digital twin technology built for fleet operators and auto manufacturers, applying AI-based predictive maintenance on a much grander scale.

Having proved itself in the field with a well-known German manufacturer of high-performance vehicles, CarTwin now serves the shifting needs of the auto industry with an AI-driven solution that enables a diverse set of automotive use cases. CarTwin creates unique innovations and unique opportunities by connecting the physical and digital worlds, which can provide real-time operational awareness of vehicle, component, and manufacturing performance.

“The automotive industry has witnessed an incredible transformation over the last decade. CarTwin represents the next chapter in its digital disruption. We’re leveraging artificial intelligence, industry expertise, and easy-to-use tools to provide the most complete Digital Twin technology blueprint available today,” says Corey Thompson, CEO of CarTwin.

Basically, CarTwin can provide diagnostic and predictive models for all vehicle systems for which data is available (either directly or indirectly) onboard the vehicle.

“As an example, for the project with our OEM, we have already built models for the suspension and battery systems and are continuing to add additional systems as we move along. Our POC project will add the ignition system, fuel system, and turbocharging system,” says Thompson.

Virtually any part of the vehicle that has sensors, or that sensors can be developed for, can be “twinned.” These data sets will be augmented with design and manufacturing data made available by the OEM.

CarTwin obtains its data from the CAN Bus, which is essentially the communication network on a vehicle, enabling real-time data acquisition. CarTwin combines the CAN Bus data with historical inspection, repair, and parts-replacement data from service centers to build its models. This means it can be used in any vehicle newer than 1996, when the CAN Bus started to be used.

The platform records and analyzes the data streaming through fleet vehicles. In combination with powerful AI models, CarTwin predicts breakdowns, monitors and improves performance, and measures and records real-time greenhouse gas emissions, reducing expensive maintenance costs and avoiding the lost revenue associated with fleet downtime. Insights from CarTwin help reduce the carbon footprint and support corporate ESG (Environmental, Social, and Governance) objectives.
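One simple building block behind breakdown prediction is flagging sensor readings that break sharply from their recent baseline. CarTwin’s actual models are proprietary, so the signal, data values, window, and threshold below are invented for illustration:

```python
import statistics

def flag_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate strongly from a rolling baseline.
    A toy stand-in for the kind of model a digital-twin platform might run."""
    flags = []
    for i, x in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) < 3:
            flags.append(False)  # not enough history to judge yet
            continue
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        flags.append(stdev > 0 and abs(x - mean) / stdev > threshold)
    return flags

# Simulated coolant-temperature stream with one spike (hypothetical data).
temps = [90.1, 90.3, 89.9, 90.2, 90.0, 90.4, 118.0, 90.1, 90.2]
print([i for i, f in enumerate(flag_anomalies(temps)) if f])  # -> [6]
```

A production system would layer learned models and service-history data on top of checks like this, but the core idea — compare each live reading against what the twin expects — is the same.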

“Most importantly,” says Thompson, “our solutions require little to no infrastructure improvements for our customers to experience significant competitive advantages. We have devices for purchase that simply plug into the vehicle’s OBD (on-board diagnostics) port.”

“Additionally, CarTwin provides carbon footprint intensity and reporting to meet corporate ESG objectives,” Thompson adds. “Our CarbonLess tool can identify how and where you can save fuel, and fleet operators can use CarbonLess to monitor and identify anomalous CO2 and NOx emissions.”


Artificial Intelligence has been promising to revolutionize healthcare and several other industries for a long time — and by all evidence, the revolution, while still in its infancy, has arrived. If you are not sure, just ask Siri!

Seriously, AI and machine learning are already having major impacts across many industries, not the least of which is healthcare, where AI is already radically shifting the way healthcare is delivered, streamlining operations, improving diagnostics, and improving outcomes.

However, not surprisingly, the growing ubiquity of AI is raising some concerns about privacy and other issues in the way that users interact with AI algorithms. So much so that the Biden administration has recently launched a blueprint for an “AI Bill of Rights” to help ensure the ethical use of AI. Not surprisingly, it is modeled after the sort of patient “bills of rights” people have come to expect as they interact with doctors, hospitals, and other healthcare professionals.

The blueprint outlines five basic principles:
1. Safe and effective systems
2. Algorithmic bias protections
3. Data privacy
4. Transparency and explanation
5. Human alternatives and backups

These basic principles are meant to provide a framework or “guidance” for the US government, tech companies, researchers, and other stakeholders, but for now, it is important to point out that the blueprint represents nonbinding recommendations and does not constitute regulatory policy.

Such regulations will undoubtedly be taken up by Congress in the not-too-distant future. But for now, the guidelines have been designed to apply to AI and automated tools across industries, including healthcare, and are meant to begin the dialogue around a much-needed larger conversation on the ethical use of AI.

The core concept behind the five guiding principles is for Americans to feel safer as they increasingly interact with AI, particularly in the healthcare setting. With such guidelines in place, they can feel confident that they are being shielded from harmful or ineffective systems; that they will not face bias or inequities caused by AI; and that they will be protected from abusive data practices via built-in safeguards and transparency over how their data is used.

The fifth principle is an important one. It ensures that Americans receive a complete understanding of how the AI works and is being used, and offers the option to “opt out” and interact with a human instead, where practical and appropriate.

“Considered together, the five principles and associated practices of the Blueprint for an AI Bill of Rights form an overlapping set of backstops against potential harms,” the document concludes. “This purposefully overlapping framework, when taken as a whole, forms a blueprint to help protect the public from harm. The measures taken to realize the vision set forward in this framework should be proportionate with the extent and nature of the harm, or risk of harm, to people’s rights, opportunities, and access.”

Rohit Mahajan is a Managing Partner with BigRio. He has a particular expertise in the development and design of innovative solutions for clients in Healthcare, Financial Services, Retail, Automotive, Manufacturing, and other industry segments.

BigRio is a technology consulting firm empowering data to drive innovation, and advanced AI. We specialize in cutting-edge Big Data, Machine Learning, and Custom Software strategy, analysis, architecture, and implementation solutions. If you would like to benefit from our expertise in these areas or if you have further questions on the content of this article, please do not hesitate to contact us.

October is the month for all things spooky, so it’s no wonder that the 2022 Nobel Prize in Physics has been awarded to three scientists whose work demonstrated the groundbreaking “spooky action at a distance” relationship in quantum mechanics.

While it is an odd phenomenon, there is nothing “supernatural” about this “spooky action at a distance” behavior of particles. The theory, as proven by the prize winners’ work, refers to the way particles once “bound” together at the quantum level will still behave as if they were bound, even when separated over great distances.

John F. Clauser, Alain Aspect, and Anton Zeilinger won the 10 million Swedish krona ($915,000) prize for “experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science,” the Royal Swedish Academy of Sciences announced on Tuesday, Oct. 4.

Albert Einstein, who was aware of the phenomenon, dubbed it “spooky action at a distance.”

This concept of “quantum entanglement” paves the way for such theoretical applications as teleportation and forms the very real basis for quantum computing. It is this “spooky” behavior of entangled particles that makes quantum computers orders of magnitude more powerful than even the most powerful supercomputers in use today.

A quantum computer in use by Google is said to be 100 million times faster than any of today’s conventional systems.

What is a Quantum Computer?
The main difference between quantum computers and conventional computers is that quantum computers do not store and process information in the familiar bits and bytes, but rather in something else entirely: quantum bits, or “qubits.”

All conventional computing comes down to streams of electrical or optical pulses representing 1s or 0s. Everything from your tweets and e-mails to your iTunes songs and YouTube videos is essentially a long string of these binary digits.

Qubits, on the other hand, are typically made up of subatomic particles such as electrons or photons – the very same photons involved in the trio’s Nobel Prize-winning experiments.

Qubits leverage “quantum entanglement,” something that Einstein himself called “spooky action at a distance.”

A simple way of understanding entanglement is as an interdependence born of a long and intimate relationship between two particles – like a child who goes away to college across the country but still “depends” on the support of his or her parents.

In quantum computing, entanglement is what accounts for the nearly incomprehensible processing power and memory of quantum computers. In a conventional computer, bits and processing power are in a 1:1 relationship – if you double the bits, you get double the processing power, but thanks to entanglement, adding extra qubits to a quantum machine produces an exponential increase in its calculation ability.
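The exponential scaling is easy to see by counting what it takes merely to describe an n-qubit state on a classical machine (the 16 bytes per amplitude below assumes double-precision complex values):

```python
# Describing an n-qubit register classically takes 2**n complex amplitudes,
# so each additional qubit DOUBLES the memory required to simulate it.
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number (assumption)

def sim_memory_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (10, 20, 30, 40):
    print(f"{n} qubits -> {2 ** n:,} amplitudes, {sim_memory_bytes(n):,} bytes")
```

Thirty qubits already demand 16 GiB of amplitudes, and forty demand 16 TiB; contrast that with classical bits, where doubling the bits merely doubles the storage. That gap is the sense in which entanglement gives quantum machines their exponential edge.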

Quantum computing is still very much an emerging technology, with large-scale practical applications still a way off. However, the technology is steadily graduating from the lab and heading for the marketplace. In 2019, Google announced that it had achieved “quantum supremacy,” IBM has committed to doubling the power of its quantum computers every year, and numerous other companies and academic institutions are investing billions toward making quantum computing a commercial reality.

Quantum computing will take artificial intelligence and machine learning to the next level. The marriage between the two is an area to pay very close attention to for startups as well as for where Big Tech will be going over the next five to ten years.

Zeilinger, 77, professor emeritus at the University of Vienna, said during a press conference about the award, “It is quite clear that in the near future we will have quantum communication all over the world.”

Kudos to the Royal Swedish Academy for recognizing the groundbreaking work of the gentlemen who have opened the door into another world – and into the unbridled potential of artificial intelligence and information technology.


Artificial Intelligence has been promising to revolutionize healthcare for quite some time; however, one look at any modern hospital or healthcare facility, and it is easy to see that the revolution is already here.

In almost every patient touchpoint AI is already having an enormous impact on changing the way healthcare is delivered, streamlining operations, improving diagnostics, and improving outcomes.

Although the deployment of AI in the healthcare sector is still in its infancy, it is becoming a much more common sight. According to the technology consulting firm Gartner, healthcare IT spending for 2021 was a hefty $140 billion worldwide, with enterprises listing “AI and robotic process automation (RPA)” as their lead spending priorities.

Here, in no particular order of importance, are seven of the top areas where healthcare AI solutions are being developed and deployed.

1. Operations and Administration
A hospital’s operation and administration expenses can be a major drain on the healthcare system. AI is already providing tools and solutions that are designed to improve and streamline administration. Such AI algorithms are proving to be invaluable for insurers, payers, and providers alike. Specifically, there are several AI programs and AI healthcare startups that are dedicated to finding and eliminating fraud. It has been estimated that healthcare fraud costs insurers anywhere between $70 billion and $234 billion each year, harming both patients and taxpayers.

2. Medical Research
Probably one of the most promising areas where AI is making a major difference in healthcare is in medical research. AI tools and software solutions are making an astounding impact on streamlining every aspect of medical research, from improved screening of candidates for clinical trials, to targeted molecules in drug discovery, to the development of “organs on a chip” – AI combined with the power of ever-improving Natural Language Processing (NLP) is changing the very nature of medical research for the better.

3. Predictive Outcomes and Resource Allocation
AI is being used in hospital settings to better predict patient outcomes and more efficiently allocate resources. This proved extraordinarily helpful during the peak of the pandemic, when facilities were able to use AI algorithms to predict, upon admission to the ER, which patients would most benefit from ventilators, which were in very short supply. Similarly, a Stanford University pilot project is using AI algorithms to determine which patients are at high risk of requiring ICU care within an 18-to-24-hour window.
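In principle, such a risk model maps a patient’s measurements to a probability-like score. The features, weights, and example vitals below are entirely invented for illustration; a real clinical model like Stanford’s learns its parameters from historical outcomes and uses far richer inputs:

```python
import math

# Hypothetical feature weights for a toy admission-risk score; a real model
# would learn these from data, not hard-code them.
WEIGHTS = {"resp_rate": 0.15, "spo2_deficit": 0.25, "age_over_65": 0.8}
BIAS = -4.0

def icu_risk(vitals: dict) -> float:
    """Logistic risk score in (0, 1): a probability-like estimate that the
    patient needs ICU care within the prediction window."""
    z = BIAS + sum(WEIGHTS[k] * vitals[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

stable = {"resp_rate": 14, "spo2_deficit": 1, "age_over_65": 0}
acute  = {"resp_rate": 30, "spo2_deficit": 10, "age_over_65": 1}
print(f"stable: {icu_risk(stable):.2f}  acute: {icu_risk(acute):.2f}")
```

Scoring every admitted patient this way lets staff rank who is most likely to need an ICU bed soon, which is exactly the resource-allocation use the paragraph describes.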

4. Diagnostics
AI applications in diagnostics, particularly in the field of medical imaging, are extraordinary. AI can “see” details in MRIs and other medical images far better than the human eye and, when tied into the enormous volume of medical image databases, can make far more accurate diagnoses of conditions such as breast cancer, eye disease, and heart and lung disease. AI can examine vast numbers of medical images and identify patterns in seconds that would take human technicians hours or days, and it can detect minor variations that humans simply could not find, no matter how much time they had. This not only improves patient outcomes but also saves money. For example, studies have found that earlier diagnosis and treatment of most cancers can cut treatment costs by more than 50%.

5. Training
AI is allowing medical students and doctors to get “hands-on training” via virtual surgeries and other procedures that provide real-time feedback on success and failure. Such AI-based training programs allow students to learn techniques in safe environments and receive immediate critique on their performance before they get anywhere near a patient. One study found that med students learned skills 2.6 times faster and performed 36% better than those not taught with AI.

6. Telemedicine
Telemedicine has revolutionized patient care, particularly since the pandemic, and now AI is taking remote medicine to a whole new level. Patients can tie into AI-driven diagnostic tools through their smartphones, providing remote images and monitoring of changes in detectable skin cancers, eye conditions, dental conditions, and more. AI programs are also being used to remotely monitor heart patients, diabetes patients, and others with chronic conditions, and to help ensure they comply with taking their medications.

7. Direct Treatment
In addition to enabling better clinical outcomes through improved diagnostics and resource allocation, AI is already making a huge difference in the direct delivery of treatments. One exciting and extremely profound example is robotic, AI-driven surgical procedures. Minimally invasive and non-invasive AI-guided surgical procedures are already becoming quite common. Soon, all but the most major surgeries, such as open-heart surgery, can and will be done as minimally invasive procedures, and even the most complex “open procedures” will be made safer, more accurate, and more efficient thanks to surgical AI and digital twins of major organs such as the lungs and heart.


Sometimes I get to thinking that Alexa isn’t really my friend. I mean sure, she’s always polite enough (well, usually, but it’s normal for friends to fight, right?). But she sure seems chummy with that pickle-head down the hall too. I just don’t see how she can connect with us both — we’re totally different!

So that’s the state of the art of conversational AI: a common shared agent that represents an organization. A spokesman. I guess she’s doing her job, but she’s not really representing me or M. Pickle, and she can’t connect with either of us as well as she might if she didn’t have to cater to both of us at the same time. I’m exaggerating a little bit – there are some personalization techniques (*cough* crude hacks *cough*) in place to help provide a custom experience:

  • There is a marketplace of skills. Recently, I can even ask her to install one for me.
  • I have a user profile. She knows my name and zip code.
  • Through her marketplace, she can access my account and run my purchase through a recommendation engine (the better to sell you with, my dear!)
  • I changed her name to “Echo” because who has time for a third syllable? (If only I were hamming this up for the post; sadly, a true story)
  • And if I may digress to my other good friend Siri, she speaks British to me now because duh.

It’s a start but, if we’re honest, none of these change the agent’s personality or capabilities to fit with all of my quirks, moods, and ever-changing context and situation. Ok, then. What’s on my wishlist?

  • I want my own agent with its own understanding of me, able to communicate and serve as an extension of myself.
  • I want it to learn everything about how I speak. That I occasionally slip into a Western accent and say “ruf” instead of “roof”. That I throw around a lot of software dev jargon; Python is neither a trip to the zoo nor dinner (well, once, and it wasn’t bad. A little chewy.) That Pickle Head means my colleague S… never mind. You get the idea.
  • I want my agent to extract necessary information from me in a way that fits my mood and situation. Am I running late for a life-changing meeting on a busy street uphill in a snowstorm? Maybe I’m just goofing around at home on a Saturday.
  • I want my agent to learn from me. It doesn’t have to know how to do everything on this list out of the box – that would be pretty creepy – but as it gets to know me it should be able to pick up on my cues, not to mention direct instructions.

Great, sign me up! So how do I get one? The key is to embrace training (as opposed to coding, crafting, and other manual activities). As long as there is a human in the loop, it is simply impossible to scale an agent platform to this level of personalization. There would be a separate, ongoing development project for every single end user – great job security for developers, but the platform would have to sell an awful lot of stuff to pay for them all.

To embrace training, we need to dissect what goes into training. Let’s over-simplify the “brain” of a conversational AI for a moment: we have NLU (natural language understanding), DM (dialogue management), and NLG (natural language generation). Want an automatically-produced agent? You have to automate all three of these components.

  • NLU – As of this writing, this is the most advanced of the three components. Today’s products often incorporate at least some training automation, and that automation has been a primary enabler of the assistants we have now. Improvements will need to include individualized NLU models that continually learn from each user, plus (custom, rapid) language models that expand beyond the normal, ubiquitous day-to-day vocabulary to include trade-specific, hobby-specific, or even made-up terms. Yes, I want Alexa to speak my daughter’s imaginary language with her.
  • DM – Sorry developers, if we make plug-in skills à la Mobile Apps 2.0 then we aren’t going to get anywhere. Dialogues are just too complex, and rules and logic are just too brittle. This cannot be a programming exercise. Agents must learn to establish goals and reason about using conversation to achieve those goals in an automated fashion.
  • NLG – Sorry marketing folks, there isn’t brilliant copy for you to write. The agent needs the flexibility to communicate with the user in the most effective way, and it can’t do that if it’s shackled by canned phrases that “reflect the brand”.
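To make that over-simplified three-part “brain” concrete, here is a minimal, hypothetical sketch in Python. Every name in it is invented for illustration, and the hand-written keyword rules and canned templates are stand-ins for exactly the kind of manual crafting this post argues should give way to training:

```python
# Toy NLU -> DM -> NLG pipeline. The rules and templates are deliberately
# brittle: they illustrate the structure, not a trainable implementation.

def nlu(utterance: str) -> dict:
    """NLU: map raw text to an intent (here, naive keyword matching)."""
    text = utterance.lower()
    if "weather" in text:
        return {"intent": "get_weather"}
    if "post" in text:
        return {"intent": "publish_post"}
    return {"intent": "unknown"}

def dm(state: dict) -> str:
    """DM: pick the next action for the recognized intent (brittle rules)."""
    return {
        "get_weather": "fetch_forecast",
        "publish_post": "publish",
    }.get(state["intent"], "ask_clarification")

def nlg(action: str) -> str:
    """NLG: render the chosen action as text (canned phrases)."""
    templates = {
        "fetch_forecast": "Here's today's forecast.",
        "publish": "Posting it now.",
        "ask_clarification": "Sorry, could you rephrase that?",
    }
    return templates[action]

def agent(utterance: str) -> str:
    return nlg(dm(nlu(utterance)))
```

An automatically-produced agent would replace all three hand-written stages with models learned from each user’s own data.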

In my experience, most current offerings focus on the NLU component – and that’s awesome! But to realize the potential of MicroAgents (yeah, that’s right. MicroAgents. You heard it here first) we need to automate the entire agent, which is easier said than done. That doesn’t mean it won’t happen anytime soon – in fact, it might happen sooner than you think.

Echo, I’m done writing. Post this sucker.

Doh!


 

In the 2011 Jeopardy! face-off between IBM’s Watson and Jeopardy! champions Ken Jennings and Brad Rutter, Jennings acknowledged his brutal takedown by Watson during the final Double Jeopardy round by stating, “I for one welcome our new computer overlords.” This display of computer “intelligence” sparked widespread conversation among many groups of people, many of whom became concerned at what they perceived as Watson’s ability to think like a human. But, as BigR.io’s Director of Business Development Andy Horvitz points out in his blog “Watson’s Reckoning,” even the Artificial Intelligence technology with which Watson was built is now obsolete.

The thing is, while Watson was once considered to be the cutting-edge technology of Artificial Intelligence, Artificial Intelligence itself isn’t even cutting-edge anymore. Now, before you start lecturing me about how AI is cutting-edge, let me explain.

Defining Artificial Intelligence

You see, as Bernard Marr points out, Artificial Intelligence is the overarching term for machines having the ability to carry out human tasks. In this regard, modern AI as we know it has already been around for decades – since the 1950s at least (especially thanks to the influence of Alan Turing). What’s more, some form of the concept of artificial intelligence dates back to ancient Greece, when philosophers started describing human thought processes as a symbolic system. It’s not a new concept, and it’s a goal that scientists have been working toward for as long as there have been machines.

The problem is that the term “artificial intelligence” has become a colloquial term applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving.” But the thing is, AI isn’t necessarily synonymous with “human thought capable machines.” Any machine that can complete a task in a similar way that a human might can be considered AI. And in that regard, AI really isn’t cutting-edge.

What is cutting-edge are the modern approaches to Machine Learning, which sit at the cusp of “human-like” AI technology (like Deep Learning, but that’s for another blog).

Though many people (scientists and common folk alike) use the terms AI and Machine Learning interchangeably, Machine Learning actually has the narrower focus of using the core ideas of AI to help solve real-world problems. For example, while Watson can perform the seemingly human task of critically processing and answering questions (AI), it lacks the ability to apply those answers pragmatically to real-world problems, such as synthesizing the queried information to help find a cure for cancer (Machine Learning).

Additionally, as I’m sure you already know, Machine Learning is based upon the premise that these machines train themselves with data rather than by being programmed, which is not necessarily a requirement of Artificial Intelligence overall.
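As a toy illustration of that premise, compare a hand-programmed filter with one that derives its own threshold from example data. Everything below is invented for illustration – the “spam scores” are made-up numbers, not output from any real system:

```python
# Programmed vs. trained: in the first function a human chose the rule;
# in the second, the decision boundary comes from the data itself.

def programmed_filter(score: float) -> bool:
    """Hand-coded rule: a human picked the 0.5 threshold."""
    return score > 0.5

def train_filter(spam_scores, ham_scores):
    """'Trained' filter: the threshold is derived from labeled examples,
    placed midway between the two class means."""
    spam_mean = sum(spam_scores) / len(spam_scores)
    ham_mean = sum(ham_scores) / len(ham_scores)
    threshold = (spam_mean + ham_mean) / 2
    return lambda score: score > threshold

# Feed it different examples and you get a different filter, with no
# reprogramming - that is the Machine Learning premise in miniature.
learned_filter = train_filter(spam_scores=[0.8, 0.9, 0.85],
                              ham_scores=[0.1, 0.2, 0.15])
```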

https://xkcd.com/1838/

Why Know the Difference?

So why is it important to know the distinction between Artificial Intelligence and Machine Learning? Well, in many ways, it’s not as important now as it might be in the future. Since the two terms are used so interchangeably and Machine Learning is seen as the technology driving AI, hardly anyone would correct you if you were to use them incorrectly. But, as technology progresses ever faster, it’s good practice to know the distinction between these terms for your personal and professional gain.

Artificial Intelligence, while a hot topic, is not yet widespread – but it might be someday. For now, when you want to inquire about AI for your business (or personal use), you probably mean Machine Learning instead. By the way, did you know we can help you with that? Find out more here.

We’re seeing and doing all sorts of interesting work in the Image domain. Recent blog posts, white papers, and roundtables capture some of this work, from image segmentation and classification to video highlights. But an Image area of broad interest that, to this point, we’ve but scratched the surface of is Video-based Anomaly Detection. It’s a challenging data science problem, in part due to the velocity of data streams and missing data, but one with wide-ranging applicability.

In-store monitoring of customer movements and behavior.

Motion sensing, the antecedent to Video-based Anomaly Detection, isn’t new and there are a multitude of commercial solutions in that area. Anomaly Detection is something different and it opens the door to new, more advanced applications and more robust deployments. Part of the distinction between the two stems from “sensing” what’s usual behavior and what’s different.

Anomaly Detection

Walkers in the park look “normal”. The bicyclist is the anomaly. 

Anomaly detection requires the ability to understand a motion “baseline” and to trigger notifications based on deviations from that baseline. Having this ability offers the opportunity to deploy AI-monitored cameras in many more real-world situations across a wide range of security use cases, smart city monitoring, and more, wherein movements and behaviors can be tracked and measured with higher accuracy and at a much larger scale than ever before.
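Here is a minimal sketch of that baseline-and-deviation idea, assuming we already have a per-frame “motion score” (say, summed pixel differences between consecutive frames). The numbers are invented toy values; a production system would learn a far richer baseline with deep models rather than simple statistics:

```python
import statistics

def build_baseline(scores):
    """Learn what 'usual' motion looks like from historical frame scores."""
    return statistics.mean(scores), statistics.stdev(scores)

def is_anomaly(score, baseline, k=3.0):
    """Trigger when a new frame deviates more than k sigmas from baseline."""
    mean, stdev = baseline
    return abs(score - mean) > k * stdev

# Walkers in the park produce similar scores frame to frame;
# the bicyclist's frame stands far outside the learned baseline.
walking = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 10.4]
baseline = build_baseline(walking)
```

The point of the sketch is the two-step structure: first characterize “normal,” then notify only on deviations, which is what separates anomaly detection from plain motion sensing.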

With 500 million video cameras in the world tracking these movements, a new approach is required to deal with this mountain of data. For this reason, Deep Learning and advances in edge computing are enabling a paradigm shift from video recording and human watchers toward AI monitoring. Many systems will have humans “in the loop,” with people being alerted to anomalies. But others won’t. For example, in the near future, smart cities will automatically respond to heavy traffic conditions with adjustments to the timing of stoplights, and they’ll do so routinely without human intervention.

Human in the Loop

Human in the loop.

As on many AI fronts, this is an exciting time and the opportunities are numerous. Stay tuned for more from BigR.io, and let’s talk about your ideas on Video-based Anomaly Detection or AI more broadly.