
Artificial Intelligence has been promising to revolutionize healthcare and several other industries for a long time — and by all evidence, the revolution, while still in its infancy, has arrived. If you are not sure, just ask Siri!

Seriously, AI and machine learning are already having a major impact across many industries, not least healthcare, where AI is radically shifting the way care is delivered by streamlining operations, sharpening diagnostics, and improving outcomes.

However, the growing ubiquity of AI is raising concerns about privacy and other issues in the way users interact with AI algorithms. So much so that the Biden administration recently released a blueprint for an “AI Bill of Rights” to help ensure the ethical use of AI. Fittingly, it is modeled after the sort of patient “bills of rights” people have come to expect as they interact with doctors, hospitals, and other healthcare professionals.

The blueprint outlines five basic principles:
1. Safe and effective systems
2. Algorithmic discrimination protections
3. Data privacy
4. Notice and explanation
5. Human alternatives, consideration, and fallback

These basic principles are meant to provide a framework, or “guidance,” for the US government, tech companies, researchers, and other stakeholders. It is important to note, however, that the blueprint is a set of nonbinding recommendations and does not constitute regulatory policy.

Such regulations will undoubtedly be taken up by Congress in the not-too-distant future. But for now, the guidelines have been designed to apply to AI and automated tools across industries, including healthcare, and are meant to open a much-needed larger conversation on the ethical use of AI.

The core concept behind the five guiding principles is for Americans to feel safer as they increasingly interact with AI, particularly in healthcare settings. With such guidelines in place, they can feel confident that they are being shielded from harmful or ineffective systems, that they will not face bias or inequities caused by AI, and that they will be protected from abusive data practices through built-in safeguards and transparency about how their data is used.

The last two principles are especially important: together they ensure that Americans are given a plain-language explanation of how an AI system works and how it is being used, along with the option to “opt out” and interact with a human instead, where practical and appropriate.

“Considered together, the five principles and associated practices of the Blueprint for an AI Bill of Rights form an overlapping set of backstops against potential harms,” the document concludes. “This purposefully overlapping framework, when taken as a whole, forms a blueprint to help protect the public from harm. The measures taken to realize the vision set forward in this framework should be proportionate with the extent and nature of the harm, or risk of harm, to people’s rights, opportunities, and access.”

Rohit Mahajan is a Managing Partner with BigRio. He has particular expertise in the development and design of innovative solutions for clients in Healthcare, Financial Services, Retail, Automotive, Manufacturing, and other industry segments.

BigRio is a technology consulting firm empowering data-driven innovation and advanced AI. We specialize in cutting-edge Big Data, Machine Learning, and Custom Software strategy, analysis, architecture, and implementation solutions. If you would like to benefit from our expertise in these areas or if you have further questions on the content of this article, please do not hesitate to contact us.

October is the month for all things spooky, so it’s no wonder that the 2022 Nobel Prize in Physics has been awarded to three scientists whose groundbreaking work demonstrated the “spooky action at a distance” relationship in quantum mechanics.

While it is an odd phenomenon, there is nothing “supernatural” about the “spooky action at a distance” behavior of particles. The theory, as confirmed by the experiments of the scientists receiving the award, refers to the way particles once “bound” together at the quantum level will still behave as if they were bound, even when separated over great distances.

John F. Clauser, Alain Aspect, and Anton Zeilinger won the 10 million Swedish krona ($915,000) prize for “experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science,” the Royal Swedish Academy of Sciences announced on Tuesday, Oct. 4.

Albert Einstein, who was famously skeptical of the phenomenon, dubbed it “spooky action at a distance.”

This concept of “quantum entanglement” paves the way for such theoretical applications as teleportation and forms the very real basis for quantum computing. It is this “spooky” behavior of entangled particles that makes quantum computers orders of magnitude more powerful than even the most powerful supercomputers in use today.

A quantum computer used by Google is said to be 100 million times faster than any of today’s classical systems.

What is a Quantum Computer?
The main difference between quantum computers and conventional computers is that they do not store and process information in the bits and bytes that we are familiar with, but rather in something else entirely, known as quantum bits, or “qubits.”

All conventional computing comes down to streams of electrical or optical pulses representing 1s and 0s. Everything from your tweets and e-mails to your iTunes songs and YouTube videos is essentially a long string of these binary digits.

Qubits, on the other hand, are typically made from subatomic particles such as electrons or photons, the very same photons that were involved in the trio’s Nobel Prize-winning experiments.

Qubits leverage “quantum entanglement,” the phenomenon that Einstein himself called “spooky action at a distance.”

A simple way of understanding “entanglement” is as an interdependence based on a long and intimate relationship between two particles, like a child who goes away to college across the country but still “depends” on the support of his or her parents.

In quantum computing, entanglement is what accounts for the nearly incomprehensible processing power and memory of quantum computers. In a conventional computer, bits and processing power are in a 1:1 relationship: double the bits and you double the processing power. Thanks to entanglement, however, adding extra qubits to a quantum machine produces an exponential increase in its calculation ability.
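To make that scaling concrete, here is a minimal back-of-the-envelope sketch in Python. It compares state counts only; it says nothing about the speed of any actual machine, and the numbers are purely illustrative.

```python
# Back-of-the-envelope comparison: classical bits vs. qubits.
# A register of n classical bits stores exactly one n-bit value at a time,
# so its capacity grows linearly with n. A register of n entangled qubits
# is described by 2**n complex amplitudes at once, so the state space a
# quantum computer manipulates grows exponentially with n.

for n in (1, 2, 10, 50):
    classical_capacity = n          # n bits hold n binary digits
    quantum_amplitudes = 2 ** n     # n qubits are described by 2**n amplitudes
    print(f"{n:>3} bits/qubits -> {classical_capacity} classical digits, "
          f"{quantum_amplitudes:,} quantum amplitudes")
```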

Quantum computing is still very much an emerging technology with large scale and practical applications still a way off. However, the technology is steadily graduating from the lab and heading for the marketplace. In 2019, Google announced that it had achieved “quantum supremacy,” IBM has committed to doubling the power of its quantum computers every year, and numerous other companies and academic institutions are investing billions toward making quantum computing a commercial reality.

Quantum computing will take artificial intelligence and machine learning to the next level. The marriage between the two is an area for startups to watch closely, and a good indicator of where Big Tech will be heading over the next five to ten years.

Zeilinger, 77, professor emeritus at the University of Vienna, said during a press conference about the award, “It is quite clear that in the near future we will have quantum communication all over the world.”

Kudos to the Royal Swedish Academy for recognizing the groundbreaking work of the scientists who have opened the door into another world and the unbridled potential of artificial intelligence and information technology.


Artificial Intelligence has been promising to revolutionize healthcare for quite some time; however, one look at any modern hospital or healthcare facility, and it is easy to see that the revolution is already here.

At almost every patient touchpoint, AI is already having an enormous impact: changing the way healthcare is delivered, streamlining operations, improving diagnostics, and boosting outcomes.

Although the deployment of AI in the healthcare sector is still in its infancy, it is becoming a much more common sight. According to the technology research firm Gartner, worldwide healthcare IT spending for 2021 was a hefty $140 billion, with enterprises listing “AI and robotic process automation (RPA)” as their top spending priorities.

Here, in no particular order of importance, are seven of the top areas where healthcare AI solutions are being developed and currently deployed.

1. Operations and Administration
A hospital’s operations and administration expenses can be a major drain on the healthcare system. AI is already providing tools and solutions designed to improve and streamline administration, and such algorithms are proving invaluable for insurers, payers, and providers alike. Specifically, several AI programs and AI healthcare startups are dedicated to finding and eliminating fraud. It has been estimated that healthcare fraud costs insurers anywhere between $70 billion and $234 billion each year, harming both patients and taxpayers.

2. Medical Research
Probably one of the most promising areas where AI is making a major difference in healthcare is medical research. AI tools and software solutions are streamlining every aspect of medical research: from improved screening of candidates for clinical trials, to identifying target molecules in drug discovery, to the development of “organs on a chip.” AI, combined with the power of ever-improving Natural Language Processing (NLP), is changing the very nature of medical research for the better.

3. Predictive Outcomes and Resource Allocation
AI is being used in hospital settings to better predict patient outcomes and more efficiently allocate resources. This proved extraordinarily helpful during the peak of the pandemic, when facilities were able to use AI algorithms to predict, upon admission to the ER, which patients would benefit most from ventilators, which were in very short supply. Similarly, a Stanford University pilot project is using AI algorithms to determine which patients are at high risk of requiring ICU care within an 18-to-24-hour window.

4. Diagnostics
AI applications in diagnostics, particularly in the field of medical imaging, are extraordinary. AI can “see” details in MRIs and other medical images far beyond what the human eye can detect and, when tied into the enormous volume of medical image databases, can make far more accurate diagnoses of conditions such as breast cancer, eye disease, heart and lung disease, and much more. AI can scan vast numbers of medical images and identify patterns in seconds that would take human technicians hours or days, and it can detect minor variations that humans simply could not find, no matter how much time they had. This not only improves patient outcomes but also saves money. For example, studies have found that earlier diagnosis and treatment of most cancers can cut treatment costs by more than 50%.

5. Training
AI is allowing medical students and doctors “hands-on training” via virtual surgeries and other procedures that can provide real-time feedback on success and failure. Such AI-based training programs allow students to learn techniques in safe environments and receive immediate critique on their performance before they get anywhere near a patient. One study found that med students learned skills 2.6 times faster and performed 36% better than those not taught with AI.

6. Telemedicine
Telemedicine has revolutionized patient care, particularly since the pandemic, and now AI is taking remote medicine to a whole new level. Patients can connect to AI-driven diagnostic tools through their smartphones, providing remote images and monitoring of changes in detectable skin cancers, eye conditions, dental conditions, and more. AI programs are also being used to remotely monitor heart patients, diabetes patients, and others with chronic conditions, and to help ensure they are taking their medications as prescribed.

7. Direct Treatment
In addition to enabling better clinical outcomes through improved diagnostics and resource allocation, AI is already making a huge difference in the direct delivery of treatments. One exciting and extremely profound example is robotic, AI-driven surgery. Minimally invasive and non-invasive AI-guided surgical procedures are already becoming quite common. Soon, all but the most major surgeries, such as open-heart surgery, will be performed as minimally invasive procedures, and even the most complex “open” procedures will be made safer, more accurate, and more efficient thanks to surgical AI and digital twins of major organs such as the lungs and heart.


NLP has evolved into an important way to track and categorize viewership in the age of cookie-less ad targeting. While users resist being identified by a single user ID, they are much less sensitive to, and even welcome, advertisers personalizing media content based on discovered preferences. This personalization comes from improvements made upon the original LDA algorithm that incorporate word2vec concepts.

The classic LDA (Latent Dirichlet Allocation) algorithm, introduced by David Blei (now at Columbia University) and colleagues, raised industry-wide interest in computerized understanding of documents. It also, incidentally, launched variational inference as a major research direction in Bayesian modeling. The ability of LDA to process massive numbers of documents, extract their main themes from a manageable set of topics, and compute with relatively high efficiency (compared to the more traditional Monte Carlo methods, which sometimes run for months) made LDA the de facto standard in document classification.
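As a concrete reference point, here is a minimal LDA sketch using gensim, one common open-source implementation. The toy corpus, topic count, and parameters are purely illustrative.

```python
# Fit a tiny LDA model: each document becomes a distribution over topics.
from gensim import corpora
from gensim.models import LdaModel

docs = [
    ["drug", "trial", "patient", "outcome"],
    ["patient", "hospital", "doctor", "treatment"],
    ["stock", "market", "earnings", "investor"],
    ["investor", "fund", "market", "returns"],
]

dictionary = corpora.Dictionary(docs)             # word <-> id mapping
bow = [dictionary.doc2bow(doc) for doc in docs]   # bag-of-words counts

lda = LdaModel(corpus=bow, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)

# Inspect the top words per topic; expect a "medical" and a "finance" theme.
for topic_id, words in lda.show_topics(num_topics=2, num_words=4, formatted=False):
    print(topic_id, [word for word, _ in words])
```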

However, the original LDA approach left the door open on certain desirable properties. It is, in the end, fundamentally just a word-counting technique. Consider these two statements:

“His next idea will be the breakthrough the industry has been waiting for.”

“He is praying that his next idea will be the breakthrough the industry has been waiting for.”

After removal of common stop words, these two semantically opposite sentences have almost identical word count features. It would be unreasonable to expect a classifier to tell them apart if that’s all you provide it as inputs.
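You can verify this with a few lines of Python. The sketch below, using scikit-learn as one possible toolkit, builds bag-of-words count vectors for the two sentences; after stop-word removal they differ by a single word.

```python
# Demonstrate why pure word counting cannot separate the two sentences above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "His next idea will be the breakthrough the industry has been waiting for.",
    "He is praying that his next idea will be the breakthrough the industry "
    "has been waiting for.",
]

# Drop common English stop words, then count what survives.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())  # the surviving vocabulary
print(counts.toarray())                    # count vectors differ in one slot
print(cosine_similarity(counts)[0, 1])     # ~0.89: nearly identical features
```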

The latest advances in the field improve upon the original algorithm on several fronts. Many of them incorporate the word2vec concept, where an embedded vector is used to represent each word in a way that reflects its semantic meaning; for example, king − man + woman ≈ queen.
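Here is a minimal sketch of that famous analogy using gensim’s downloader and a small set of pretrained GloVe vectors; any embedding model exposing most_similar() would behave similarly, though the exact neighbors depend on the vectors you load.

```python
# Word-vector arithmetic: king - man + woman should land near "queen".
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained embeddings

result = vectors.most_similar(positive=["king", "woman"],
                              negative=["man"], topn=1)
print(result)  # expected top hit: ('queen', <similarity score>)
```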

Autoencoding variational inference for topic models (AVITM) speeds up inference on new documents that are not part of the training set. Its variant, prodLDA, uses a product of experts to achieve higher topic coherence. Topic-based classification can potentially perform better as a result.

Doc2vec – generates semantically meaningful vectors to represent a paragraph or an entire document in a way that preserves word order.

LDA2vec – derives embedded vectors for the entire document in the same semantic space as the word vectors.

Both Doc2vec and LDA2vec provide document vectors ideal for classification applications.
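As an illustration, here is a minimal Doc2vec sketch with gensim. The toy corpus and hyperparameters are placeholders, but the workflow (train, then infer a per-document vector to feed a downstream classifier) is the standard one.

```python
# Train Doc2vec on a toy corpus, then infer a vector for a new document.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    "his next idea will be the breakthrough the industry has been waiting for",
    "he is praying that his next idea will be the breakthrough",
    "quarterly earnings beat expectations across the sector",
]
tagged = [TaggedDocument(words=doc.split(), tags=[i])
          for i, doc in enumerate(corpus)]

model = Doc2Vec(vector_size=50, min_count=1, epochs=40)
model.build_vocab(tagged)
model.train(tagged, total_examples=model.corpus_count, epochs=model.epochs)

# Each document now maps to a dense vector in the learned semantic space;
# these vectors can be fed directly to any standard classifier.
vec = model.infer_vector("his breakthrough idea".split())
print(vec[:5])
```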

All these new techniques achieve scalability using either GPUs or parallel computing. Although research results demonstrate a significant improvement in topic coherence, many investigators now choose to deemphasize topic distribution as the means of document interpretation. Instead, the unique numerical representation of each individual document has become the primary concern when it comes to classification accuracy. The derived topics are often treated as simply intermediate factors, not unlike the filtered partial image features in a convolutional neural network.

With all this talk of the bright future of Artificial Intelligence (AI), it’s no surprise that almost every industry is looking into how it will reap the benefits of the forthcoming (dare I say already existing?) AI technologies. For some, AI will merely enhance the technologies already being used. For others, AI is becoming a crucial component of keeping the industry alive. Healthcare is one such industry.

The Problem: Diminishing Labor Force

Part of the need for AI-based healthcare stems from the concern that one-third of nurses are baby boomers who will retire by 2030, taking their knowledge with them. This looming shortage of healthcare workers poses an imminent need for replacements, and while nursing school enrollment numbers remain stable, the demand for experienced workers will continue to increase. This need for additional clinical support is one area where AI comes into play. In fact, these emerging technologies will serve as a force multiplier not only for experienced nurses but also for doctors and clinical support staff.

Healthcare-AI Automation Applications to the Rescue

One of the most notable solutions to this shortage will be automating the process of determining whether or not a patient actually needs to visit a doctor in person. Doctors’ offices are currently inundated with appointments and patients whose lower-level questions and concerns could be addressed without a face-to-face consultation via mobile applications. Usually in the form of chatbots, these AI-powered applications can provide basic healthcare support by “bringing the doctor to the patient” and alleviating the need for the patient to leave the comfort of their home, let alone schedule an appointment for an in-office doctor visit (saving time and resources for all parties involved).

Should a patient need to see a doctor, these applications also contain schedulers capable of determining appointment type, length, urgency, and available dates and times, forgoing the need for constant human-based clinical support and interaction. With these AI schedulers also come AI-based physician’s assistants that provide additional in-office support, like scheduling follow-up appointments, taking comprehensive notes for doctors, ordering specific prescriptions and lab tests, and providing drug-interaction information for current prescriptions. And this is just one high-level AI-based healthcare solution (albeit one with many components).
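To make the idea tangible, here is a deliberately toy triage sketch. In a real product this decision would come from a trained model with clinical oversight, not hand-written rules; the symptom lists and routing below are invented for illustration only.

```python
# Toy triage: route a patient based on self-reported symptoms.
# The symptom sets and routing are illustrative, not clinical guidance.
URGENT = {"chest pain", "shortness of breath", "severe bleeding"}
ROUTINE = {"rash", "sore throat", "mild headache"}

def triage(symptoms: set[str]) -> str:
    if symptoms & URGENT:                      # any urgent symptom wins
        return "schedule in-person visit (urgent)"
    if symptoms & ROUTINE:
        return "offer telehealth consultation"
    return "provide self-care guidance via chatbot"

print(triage({"sore throat"}))                 # -> telehealth consultation
print(triage({"chest pain", "nausea"}))        # -> in-person visit (urgent)
```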

With these advancements, Healthcare stands to gain significant ground with the help of domain-specific AI capabilities that were historically powered by humans. As a result, the next generation of healthcare has already begun, and it’s being revolutionized by AI.

Sometimes I get to thinking that Alexa isn’t really my friend. I mean sure, she’s always polite enough (well, usually, but it’s normal for friends to fight, right?). But she sure seems chummy with that pickle-head down the hall too. I just don’t see how she can connect with us both — we’re totally different!

So that’s the state of the art of conversational AI: a common shared agent that represents an organization. A spokesman. I guess she’s doing her job, but she’s not really representing me or M. Pickle, and she can’t connect with either of us as well as she might if she didn’t have to cater to both of us at the same time. I’m exaggerating a little bit – there are some personalization techniques (*cough* crude hacks *cough*) in place to help provide a custom experience:

  • There is a marketplace of skills. Recently, I can even ask her to install one for me.
  • I have a user profile. She knows my name and zip code.
  • Through her marketplace, she can access my account and run my purchase through a recommendation engine (the better to sell you with, my dear!)
  • I changed her name to “Echo” because who has time for a third syllable? (If only I were hamming this up for the post; sadly, a true story)
  • And if I may digress to my other good friend Siri, she speaks British to me now because duh.

It’s a start but, if we’re honest, none of these change the agent’s personality or capabilities to fit with all of my quirks, moods, and ever-changing context and situation. Ok, then. What’s on my wishlist?

  • I want my own agent with its own understanding of me, able to communicate and serve as an extension of myself.
  • I want it to learn everything about how I speak. That I occasionally slip into a Western accent and say “ruf” instead of “roof”. That I throw around a lot of software dev jargon; Python is neither a trip to the zoo nor dinner (well, once, and it wasn’t bad. A little chewy.) That Pickle Head means my colleague S… nevermind. You get the idea.
  • I want my agent to extract necessary information from me in a way that fits my mood and situation. Am I running late for a life-changing meeting on a busy street uphill in a snowstorm? Maybe I’m just goofing around at home on a Saturday.
  • I want my agent to learn from me. It doesn’t have to know how to do everything on this list out of the box – that would be pretty creepy – but as it gets to know me it should be able to pick up on my cues, not to mention direct instructions.

Great, sign me up! So how do I get one? The key is to embrace training (as opposed to coding, crafting, and other manual activities). As long as there is a human in the loop, it is simply impossible to scale an agent platform to this level of personalization. There would be a separate and ongoing development project for every single end user… great job security for developers, but the platform would have to sell an awful lot of stuff to pay for it.

To embrace training, we need to dissect what goes into training. Let’s over-simplify the “brain” of a conversational AI for a moment: we have NLU (natural language understanding), DM (dialogue management), and NLG (natural language generation). Want an automatically-produced agent? You have to automate all three of these components (a minimal sketch of the pipeline follows the list below).

  • NLU – As of this writing, this is the most advanced component of the three. Today’s products often do incorporate at least some training automation, and that’s been a primary enabler that leads to the assistants that we have now. Improvements will need to include individualized NLU models that continually learn from each user, and the addition of (custom, rapid) language models that can expand upon the normal and ubiquitous day-to-day vocabulary to include trade-specific, hobby-specific, or even made-up terms. Yes, I want Alexa to speak my daughter’s imaginary language with her.
  • DM – Sorry developers, if we make plugin skills ala Mobile Apps 2.0 then we aren’t going to get anywhere. Dialogues are just too complex, and rules and logic are just too brittle. This cannot be a programming exercise. Agents must learn to establish goals and reason about using conversation to achieve those goals in an automated fashion.
  • NLG – Sorry marketing folks, there isn’t brilliant copy for you to write. The agent needs the flexibility to communicate to the user in the most effective way, and it can’t do that if it’s shackled by canned phrases that “reflect the brand”.
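Here is the promised sketch of that three-stage pipeline. Every name in it is hypothetical, and the hard-coded rules and canned templates in the DM and NLG stages are exactly the kind of brittle, manual artifacts the bullets above argue we must learn to train away.

```python
# A minimal, hand-coded NLU -> DM -> NLG pipeline (illustrative only).
from dataclasses import dataclass

@dataclass
class Intent:
    name: str
    confidence: float

def nlu(utterance: str) -> Intent:
    """Natural language understanding: raw text -> structured intent."""
    if "weather" in utterance.lower():
        return Intent("get_weather", 0.92)
    return Intent("unknown", 0.30)

def dm(intent: Intent) -> str:
    """Dialogue management: pick the next action for the current intent."""
    if intent.name == "get_weather" and intent.confidence > 0.5:
        return "report_weather"
    return "ask_clarification"

def nlg(action: str) -> str:
    """Natural language generation: render the chosen action as text."""
    templates = {
        "report_weather": "It looks sunny where you are today.",
        "ask_clarification": "Sorry, could you rephrase that?",
    }
    return templates[action]

print(nlg(dm(nlu("What's the weather like?"))))  # -> a canned weather reply
```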

In my experience, most current offerings are focusing on the NLU component – and that’s awesome! But to realize the potential of MicroAgents (yeah, that’s right. MicroAgents. You heard it here first) we need to automate the entire agent, which is easier said than done. That’s not to say it won’t happen soon; in fact, it might happen sooner than you think.

Echo, I’m done writing. Post this sucker.

Doh!

In the 2011 Jeopardy! face-off between IBM’s Watson and Jeopardy! champions Ken Jennings and Brad Rutter, Jennings acknowledged his brutal takedown by Watson by writing, during Final Jeopardy, “I for one welcome our new computer overlords.” This display of computer “intelligence” sparked a massive amount of conversation among myriad groups of people, many of whom became concerned at what they perceived as Watson’s ability to think like a human. But, as BigR.io’s Director of Business Development Andy Horvitz points out in his blog “Watson’s Reckoning,” even the Artificial Intelligence technology with which Watson was produced is now obsolete.

The thing is, while Watson was once considered to be the cutting-edge technology of Artificial Intelligence, Artificial Intelligence itself isn’t even cutting-edge anymore. Now, before you start lecturing me about how AI is cutting-edge, let me explain.

Defining Artificial Intelligence

You see, as Bernard Marr points out, Artificial Intelligence is the overarching term for machines having the ability to carry out human tasks. In this regard, modern AI as we know it has already been around for decades, since at least the 1950s (especially thanks to the influence of Alan Turing). What’s more, some form of the concept of artificial intelligence dates back to ancient Greece, when philosophers started describing human thought processes as a symbolic system. It’s not a new concept, and it’s a goal that scientists have been working toward for as long as there have been machines.

The problem is that the term “artificial intelligence” has become a colloquial term applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving.” But the thing is, AI isn’t necessarily synonymous with “human thought capable machines.” Any machine that can complete a task in a similar way that a human might can be considered AI. And in that regard, AI really isn’t cutting-edge.

What is cutting-edge are the modern approaches to Machine Learning, which sit at the cusp of “human-like” AI technology (like Deep Learning, but that’s for another blog).

Though many people (scientists and common folk alike) use the terms AI and Machine Learning interchangeably, Machine Learning actually has the narrower focus of using the core ideas of AI to help solve real-world problems. For example, while Watson can perform the seemingly human task of critically processing and answering questions (AI), it lacks the ability to use those answers pragmatically to solve real-world problems, like synthesizing queried information to help find a cure for cancer (Machine Learning).

Additionally, as I’m sure you already know, Machine Learning is based upon the premise that these machines train themselves with data rather than by being programmed, which is not necessarily a requirement of Artificial Intelligence overall.

(xkcd #1838, “Machine Learning”: https://xkcd.com/1838/)

Why Know the Difference?

So why is it important to know the distinction between Artificial Intelligence and Machine Learning? Well, in many ways, it’s not as important now as it might be in the future. Since the two terms are used so interchangeably and Machine Learning is seen as the technology driving AI, hardly anyone would correct you if you were to use one in place of the other. But, as technology progresses ever faster, it’s good practice to know the distinction between these terms for your personal and professional gain.

Artificial Intelligence, while a hot topic, is not yet widespread – but it might be someday. For now, when you want to inquire about AI for your business (or personal use), you probably mean Machine Learning instead. By the way, did you know we can help you with that? Find out more here.