by Tim Kilpatrick
on November 22, 2023
We all know someone who lacks the three-word phrase “I don’t know” in their vocabulary. I’ll call him Bob. Bob confidently answers questions with fabrications that can make you question what day it is. After second guessing myself and double checking the facts, I’m able to conclude it’s just Bob BS.
Do Generative AI models hallucinate or BS like Bob?
A new study argues that the perception of AI intelligence is marred by linguistic confusion. While AI, such as ChatGPT, generates impressive text, it lacks true understanding and consciousness.
University of Cincinnati professor Anthony Chemero contends that AI cannot be intelligent in the way that humans are, even though “it can lie and BS like its maker.”
While Generative AI technology like ChatGPT is breathtakingly amazing, we need to be confident we are not talking to Bob. Although Bob is very smart, I would not want his advice on life-threatening medical decisions.
More from the builders of AI.
“I don’t think that there’s any model today that doesn’t suffer from some hallucination,” said Daniela Amodei, co-founder and president of Anthropic, maker of the chatbot Claude 2.
“They’re really just sort of designed to predict the next word,” Amodei said. “And so there will be some rate at which the model does that inaccurately.”
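Amodei's "predict the next word" point can be illustrated with a toy sketch: a frequency count over word pairs, using a made-up corpus. Real models use neural networks trained on vast datasets, but the failure mode is the same — the statistically likeliest next word is not always the true one.

```python
from collections import Counter, defaultdict

# Toy training corpus (made up purely for illustration).
corpus = "the next word the next word the next token".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower seen in training, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # "next" (seen 3 times)
print(predict_next("next"))  # "word" (seen twice, vs. once for "token")
```

The model always emits its best statistical guess, even when the training data would make that guess wrong — a crude analogue of the hallucination rate Amodei describes.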
by Tim Kilpatrick
on November 22, 2023
Our brains are constantly changing, which makes it hard for scientists to pinpoint exactly which changes encode a memory or something newly learned.
A new study aimed to understand how information may be stored in the brain.
“Memory engram cells are groups of brain cells that, activated by specific experiences, change themselves to incorporate and thereby hold information in our brain. Reactivation of these ‘building blocks’ of memories triggers the recall of the specific experiences associated to them. The question is, how do engrams store meaningful information about the world?”
“In 21st century neuroscience, many of us like to think memories are being stored in engram cells, or their sub-components. This study argues that rather than looking for information within or at cells, we should search for information between cells, and that learning may work by altering the wiring diagram of the brain – less like a computer and more like a developing sculpture.
“In other words, the engram is not in the cell; the cell is in the engram.”
It would be interesting to know how many of our 86 billion neurons and 100 trillion connections are required to store various memories.
by Tim Kilpatrick
on November 8, 2023
by Tim Kilpatrick
on November 7, 2023
The recent rollouts of Large Language Models (LLMs) (a.k.a. Foundation Models) have given hundreds of millions of people hands-on experience with what is possible with AI. We are getting to know the amazing things ChatGPT (OpenAI), Claude (Anthropic), LLaMA (Meta), PaLM (Google) and LaMDA (Google) can do, along with where they need work.
University of Pennsylvania professor Ethan Mollick has been a leader in discussions about the use of LLMs. He explores what could come next with AI agents.
Many people think the future of AI lies in “agents” – a fuzzily-defined term that refers to an autonomous AI program that is given a goal, and then works towards accomplishing it on its own. There has been a lot of buzz about agents over the past few months, but not much technology that actually works well.
As always, Professor Mollick provides examples of how this would work.
While it is easy to imagine AI agents, it is hard to know when we will trust them to carry out our goals with our money.
by Tim Kilpatrick
on November 3, 2023
There are few things more complex than clinical decision making. We ask our physicians for diagnoses, prognoses and treatments based on limited sets of known factors. They certainly can’t imagine the action, reaction, and interaction of 42 million proteins within each of our 30 trillion human cells, nor understand the 60–85% of determinants of an individual’s health outcomes that lie outside healthcare and genetics, or keep up with the two million peer-reviewed medical journal articles published each year. Hopefully, one day AI will help them with that.
When will physicians and patients be able to trust AI to help?
Christina Jewett provides some insight in her New York Times article.
The F.D.A. has approved many new programs that use artificial intelligence, but doctors are skeptical that the tools really improve care or are backed by solid research.
Google has already drawn attention from Congress with its pilot of a new chatbot … designed to answer medical questions, raising concerns about patient privacy and informed consent.
She writes that physicians are being careful with AI, using it as a scribe, for occasional second opinions, and to draft various reports. Physicians don’t trust the 350 FDA-approved, AI-powered solutions, which increases healthcare costs through duplicated effort (AI plus physician) and false positives. AI has shown some benefits, such as expediting stroke treatment by moving brain scans to the top of a radiologist’s inbox when the AI detects a stroke.
Generative AI has produced great benefits for software coders, generating first drafts of the desired code using standalone point solutions like ChatGPT. The promise is that one day Generative AI will help doctors make sense of the numerous factors contributing to a health condition. We will also need physicians to make sense of the credibility of the AI.
by Tim Kilpatrick
on October 30, 2023
To ensure the use of preventive care like shots and screenings, the Affordable Care Act of 2010 requires health plans to pay for these services without charging their members. Free is the incentive for us to do the right thing, like getting colonoscopies without copayments or dipping into our deductibles. The preventive care benefits cover 22 adult services, 27 additional services for women and 29 services for children.
Christine Rogers, 60, of Wake Forest, North Carolina, is insured by Cigna Healthcare through her job. Christine had an annual wellness visit that included typical blood tests as well as a depression screening and discussion with a physician. Cigna was billed $487, which included a $331 wellness visit and a separate $156 charge for what was billed as a 20- to 29-minute consultation with her physician. Her insurer paid $419.93, leaving Rogers with a $67.07 charge related to the consultation.
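The split in Rogers’s bill can be checked with simple arithmetic (amounts taken from the article; the variable names are my own):

```python
# Amounts as reported in the article; names are illustrative.
wellness_visit = 331.00   # preventive portion, covered at no cost
consultation = 156.00     # separately billed consult, cost-sharing applies
total_billed = wellness_visit + consultation
insurer_paid = 419.93
patient_owes = round(total_billed - insurer_paid, 2)

print(total_billed)   # 487.0
print(patient_owes)   # 67.07
```

The entire patient charge traces to the separately billed consultation, not the wellness visit itself — which is exactly the catch described below.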
What is the catch?
Not all care provided during a wellness visit counts as no-cost preventive care under federal guidelines. If a health issue arises during a checkup that prompts discussion or treatment — say, an unusual mole or heart palpitations — that consult can be billed separately, and the patient may owe a copayment or deductible charge for that part of the visit.
Who is supposed to understand all the nuances of federal regulations? Our healthcare system is too complex. Gallup found that 38% of patients defer care because of cost. I wonder how many people defer free preventive care because they don’t believe it is really free.
by Tim Kilpatrick
on October 24, 2023
Henry Kissinger once said about the collapse of complex societies, “Every civilization that has ever existed has ultimately collapsed. So, as a historian, one has to live with a sense of the inevitability of tragedy.”
Kate offers insight into potential answers, citing Jared Diamond’s 2005 book Collapse: How Societies Choose to Fail or Succeed, which ties the Roman Empire’s decline to the first bubonic plague (after volcanoes lowered temperatures) and the Maya collapse in Central America to a major drought.
This is a well-researched topic, which Kate references.
Every day we wake up to a society a little more complex than the day before. It may be helpful to learn from previous advanced societies that were also very complex.
Tagged as:
Managing Complexity,
Societal Collapse
by Tim Kilpatrick
on October 20, 2023
Although AI has great potential to help humans strategize beyond the limits of our 86 billion neurons, it faces many challenges, including trust. Introducing the Foundation Model Transparency Index. Organizations investing in layering applications on these foundation models certainly need this.
From Sayash Kapoor on his site aisnakeoil.com. Sayash was recently included in the TIME 100 Most Influential People in AI.
Foundation models such as GPT-4 and Stable Diffusion 2 are the engines of generative AI. While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies like social media. How are these models trained and deployed? Once released, how do users actually use them? Who are the workers that build the datasets that these systems rely on, and how much are they paid? Transparency about these questions is important to keep companies accountable and understand the societal impact of foundation models.
by Tim Kilpatrick
on October 10, 2023
The 2023 Nobel Prize awards go to those who pushed human knowledge down to the smallest scales.
Anne L’Huillier, Pierre Agostini and Ferenc Krausz shared the 2023 Nobel Prize in Physics for producing laser pulses lasting mere attoseconds.
One attosecond is one-quintillionth of a second, or 0.000000000000000001 seconds. More attoseconds pass in the span of one second than there are seconds that have passed since the birth of the universe.
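That comparison checks out with rough arithmetic (assuming the universe is about 13.8 billion years old):

```python
ATTOSECONDS_PER_SECOND = 10**18   # 1 attosecond = 1e-18 seconds
AGE_OF_UNIVERSE_YEARS = 13.8e9    # approximate
SECONDS_PER_YEAR = 365.25 * 24 * 3600

seconds_since_big_bang = AGE_OF_UNIVERSE_YEARS * SECONDS_PER_YEAR
print(f"{seconds_since_big_bang:.2e}")                  # roughly 4.35e17
print(ATTOSECONDS_PER_SECOND > seconds_since_big_bang)  # True
```

A second contains about a billion times more attoseconds than the universe contains elapsed seconds.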
Katalin Karikó and Drew Weissman share the Nobel Prize in Physiology or Medicine for developing mRNA that provides instructions to cells to make proteins (about 10 nanometers). A typical atom is 0.1 to 0.5 nm in diameter, DNA molecules are about 2.5 nm wide, and a typical virus is about 100 nm wide. The work led to the development of Covid-19 vaccines (75–89 nm) administered to billions around the world.
Moungi G. Bawendi, Louis E. Brus and Alexei I. Ekimov were awarded the 2023 Nobel Prize in Chemistry for the discovery and development of quantum dots (1.5–10 nm). These tiny nanoparticles are essential for a wide range of applications including LED displays, solar cells, and biomedical imaging.
Expanding human knowledge of our attosecond and nanoscale world enables more innovation as well as more complexity to manage.
by Tim Kilpatrick
on October 2, 2023
Interesting new Alzheimer’s research looks beyond amyloid plaque buildup in the brain.
Researchers conducted the most extensive analysis to date of the genomic, epigenomic, and transcriptomic changes in the brains of Alzheimer’s patients. By analyzing over 2 million cells from 400 postmortem samples, they offer insight into the interplay of four areas that could help treat Alzheimer’s disease.
Transcriptome – RNA-sequencing to analyze the gene expression patterns of 54 types of brain cells. They found impairments in the expression of genes involved in mitochondrial function, synaptic signaling, and protein complexes needed to maintain the structural integrity of the genome.
Epigenomics – The chemical modifications that affect gene usage within a given cell. They found these modifications occur most often in microglia, the immune cells responsible for clearing debris from the brain.
Microglia – Brain cells that make up 5 to 10 percent of the cells in the brain. They clear debris from the brain, act as immune cells that respond to injury or infection, and help neurons communicate with each other. The researchers found that as Alzheimer’s disease progresses, more microglia enter inflammatory states, the blood-brain barrier begins to degrade, and neurons begin to have difficulty communicating with each other.
DNA damage – during memory formation, neurons create DNA breaks. These breaks are promptly repaired, but the repair process can become faulty as neurons age. They found that as more DNA damage accumulates in neurons, it gets more difficult to repair the damage, leading to genome rearrangements and 3D folding defects.