Johns Hopkins Univ., $24.95
How does misinformation spread? What causes medical myths and pseudoscience to rapidly infect and fester in society? Seema Yasmin, an epidemiologist and author of a new book, Viral BS, has a diagnosis: the pervasive, persuasive power of storytelling. And, as Yasmin notes, “The more fantastical, the better.”
Take the anecdote that opens the book: A woman in Texas demands an Ebola vaccine for her daughter as a deadly outbreak rages a continent away in Africa in 2014. When the pediatrician tells her there is no Ebola vaccine and that her daughter faces a much greater risk from the flu, for which he can give her a vaccine, the mother storms out: “Flu vaccine?! I don’t believe in those things!”
Stories — like those this Texas woman may have heard, or maybe told herself — help us find order in a world bursting with uncertainty. But when these stories don’t reflect reality, a public malady of tenacious and preposterous medical myths can take hold, Yasmin explains. Her book sets out to treat this malady with a dose of the virus itself: Storytelling and anecdotes that move beyond dry facts and figures to reveal pseudoscience’s sticking power.
Yasmin sets up her credentials in the book’s opener — physician, director of the Stanford Health Communication Initiative, former epidemiologist at the U.S. Centers for Disease Control and Prevention — to build trust among readers. But, true to form, it’s her anecdotes of pseudoscience in her own upbringing that linger. Her India-born grandmother told her that the moon landing was a fake; as a child Yasmin would pray to the “unwalked upon moon” for clarity and vision. Yasmin and her cousins once secretly listened to Michael Jackson songs for signs of Satan worship — which an older cousin claimed were there. “Raised on conspiracy theories,” she writes, “I understand why a patient might refuse medications, say chemtrails are poison, or shun vaccines, even as I bristle at the public health implications of these beliefs and behaviors.”
Each chapter answers a question in a few pages of no-nonsense basics. The book tackles a slew of questions that have spread from the internet to dinner tables in recent years. These include: Is there lead in your lipstick? Do vaccines cause autism? Has the U.S. government banned research about gun violence (SN: 5/14/16, p. 16)? She analyzes the pseudoscientific answers that become hard to shake and reviews related research that presents the truth. The antidote is easy to swallow, thanks to Yasmin’s approach.
For instance: Should you eat your baby’s placenta? In chapter 2’s breezy three pages, Yasmin points to celebrities such as Kim Kardashian who say eating their placentas helped them with postpartum recovery. Then Yasmin quickly moves to studies that have found no medical benefits. In fact, studies point to potential harm from the practice, since the organ can carry feces, inflammatory cells and bacteria (SN Online: 7/28/17).
She pulls no punches, referring to doctors who claim to be able to cure autism as “charlatans” who offer expensive, unproven and sometimes dangerous practices. Children have died, Yasmin writes, after being given Miracle Mineral Solution as an autism cure. The solution is actually industrial bleach. She rejects the overenthusiastic prescribing of vitamin D supplements for everything from obesity to cancer (SN: 2/2/19, p. 16), showing that the evidence of a benefit isn’t there, at least not yet.
Some of the issues she addresses seem ludicrous at first glance, like “Can a pill make racists less racist?” Actress Roseanne Barr claimed that the drug Ambien made her post a racist tweet in 2018. Yasmin looks at the opposite notion, sparked by a 2012 study that linked heart disease medications to a reduction in racial bias. She explains how the drugs affect the body and how researchers tested for racial bias. Then she shifts to the dangers of trying to medicalize racism, which is not a medical phenomenon.
The book ends with a tear-out “bullshit detection kit,” a list of 12 useful tips to keep in mind when weighing the credibility of a headline, research study or tweet. Questions to consider include: Who is funding the person or organization making the claim? Has a claim been verified by those not affiliated with the source? She explains how to run a reverse online search on an image to determine whether it was doctored and to learn its original source. This list will be particularly relevant to those navigating through all the misinformation swirling around COVID-19.
Readers will come away from this book with a deeper understanding of what research studies can and cannot say, and the effects that storytelling and celebrity have on whether someone internalizes a health claim. Some readers might prefer more background science for each question — for a book that aims to crush pseudoscience, a bibliography or at least footnotes would have been useful. But perhaps this omission is part of Yasmin’s broader point. For casual readers, references and statistics miss the mark. Instead, anecdotes in easy-to-swallow doses may be just the right amount of information and storytelling needed to stop the spread of viral BS.
Fishing, First Aid, Health Education: Riverland Leaders Work to Ensure Aboriginal People Prosper – ABC News
The Murray River has long been a place of importance for Australia’s First Nations people, and local leaders are working to make sure the Riverland’s Aboriginal community continues to prosper.
The Riverland and Mallee region in South Australia is home to a number of different Aboriginal language groups, including the Ngarrindjeri people in the Southern Lakes and Coorong and the Erawirung people further upstream.
Aboriginal history in the region is estimated to be as much as 40,000 years old, according to research undertaken in 2019.
This year’s NAIDOC Week theme is Heal Country, something Riverland NAIDOC Week committee president and proud Ngarrindjeri woman Christine Abdulla said was particularly important to Aboriginal people in the Riverland.
“It’s about time everyone came together as one and started recognising our country needs to be held and in a way that recognises our sacred sites in the Riverland,” she said.
“Riverland Aboriginal history, black history, up here is over 30,000 to 40,000 years old and there’s a lot of education surrounding that [as well as] healing needing to be done.”
Taking mental health into the ring
Locals gathered for a different sort of training at the Riverland Boxing Club over the weekend, swapping the boxing gloves for a first aid kit to talk all things culture and mental health.
Aboriginal and Torres Strait Islander Mental Health First Aid training is designed to empower people with the necessary tools to manage mental health issues among their friends and their community.
Event organiser Sam Mitchell said it was a crucial training program that addressed a “very real issue” in the community.
“It gives you a bit more of an insight on some of the effects, why some of our people are experiencing mental health illnesses,” Mr Mitchell said.
“So, when you’re working with our mob and our community, you have a bit more of an understanding on why some people may be experiencing a higher rate of anxiety and depression … especially when our younger men have higher suicide rates.”
The training, held in Loxton over the weekend, provided a historical, cultural and social context behind some of the mental health challenges that Aboriginal and Torres Strait Islander people may be facing.
Mr Mitchell said the feedback he had received from participants about the training program had been encouraging, as well as humbling, with people raising the importance of their Welcome to Country.
“We do our Acknowledgment of Country and that is so important when it comes to welcoming people to the community,” Mr Mitchell said.
He said further mental health first aid training programs would be rolled out in September to engage more Aboriginal and Torres Strait Islander people in the community and empower them with the tools to seek help and support.
Group fosters reconciliation message
Fishing is something loved by the Riverland community, so a decade ago the local Aboriginal Sobriety Group thought it would be a way to get people talking about reconciliation.
Aboriginal Sobriety Group senior manager Don Scordo said that, each year, around 100 Aboriginal and non-Aboriginal people gather at Berri’s Martin Bend for a day of fishing and to share knowledge about Aboriginal culture.
“In 2019, we might have pulled in 20 fish and that’s probably the most we’ve ever had — so, fish production wise, it’s not that good,” Mr Scordo said.
“It’s all about the reconciliation and working together because that’s what we’re working towards.
“It gives us a sense of pride, of knowing that the land we’re on is Aboriginal land. We need to educate people because I feel like we’re still behind in regards to that.”
Riverland locals have been acknowledged for their ongoing work and support for the Aboriginal and Torres Strait Islander community at a special awards ceremony in Berri.
The recipients of the awards include health care professionals, educators, child care workers, artists and sportspeople, all of whom have worked tirelessly to empower the community.
Dudley Campell received the Scholar of the Year Award, which paid tribute to the ongoing work he does to tackle drug and alcohol addiction in the community.
“NAIDOC means a lot to my family and friends. It’s a place to start gathering again, sharing history of our own mob, our connections to our land and people,” he said.
Mr Campell said he hoped to continue learning about primary healthcare in the Aboriginal and Torres Strait Islander community.
“Just supporting my own mob in addressing some of the issues, regardless if it’s around drugs and alcohol, or just any other areas, just empowering them to make better choices in life,” Mr Campell said.
Donna Quinn — who was awarded Person of the Year at the ceremony — was recognised for her work in training junior doctors.
Like Mr Campell, Ms Quinn was surprised but humbled to receive the recognition during NAIDOC Week.
“It means a day of celebration for my culture, and the Torres Strait Islanders. It’s a day I’m extremely happy to celebrate, and a week everyone should take the time out to enjoy,” Ms Quinn said.
“I just want to keep working in the community and be a good role model.”
Go Ahead, AI—Surprise Us
By KIM BELLARD
Last week I was on a fun podcast with a bunch of people who were, as usual, smarter than me, and, in particular, more knowledgeable about one of my favorite topics – artificial intelligence (A.I.), particularly for healthcare. With the WHO releasing its “first global report” on A.I. — Ethics & Governance of Artificial Intelligence for Health – and with no shortage of other experts weighing in recently, it seemed like a good time to revisit the topic.
My prediction: it’s not going to work out quite like we expect, and it probably shouldn’t.
“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” Dr Tedros Adhanom Ghebreyesus, WHO Director-General, said in a statement. He’s right on both counts.
WHO’s proposed six principles are:
- Protecting human autonomy
- Promoting human well-being and safety and the public interest
- Ensuring transparency, explainability and intelligibility
- Fostering responsibility and accountability
- Ensuring inclusiveness and equity
- Promoting AI that is responsive and sustainable
All valid points, but, as we’re already learning, easier to propose than to ensure. Just ask Timnit Gebru. When it comes to using new technologies, we’re not so good about thinking through their implications, much less ensuring that everyone benefits. We’re more of a “let the genie out of the bottle and see what happens” kind of species, and I hope our future AI overlords don’t laugh too much about that.
As Stacey Higginbotham asks in IEEE Spectrum, “how do we know if a new technology is serving a greater good or policy goal, or merely boosting a company’s profit margins?…we have no idea how to make it work for society’s goals, rather than a company’s, or an individual’s.” She further notes that “we haven’t even established what those benefits should be.”
Ms. Higginbotham isn’t specifically talking about healthcare, but she could be. We can’t really agree on what a healthcare system should and shouldn’t do, much less one augmented by A.I. It’s no wonder that our first generations of A.I. in healthcare are confused.
The example that I’ve been using for years is that we can’t even agree on how human physicians seeing patients in other states via telehealth should be licensed/regulated, so how are we going to decide how a cloud-based healthcare A.I. should be?
Carissa Véliz has an idea. Writing in Harvard Business Review, she suggests that the FDA test AI like it does prescription drugs or medical devices, using randomized control trials to prove validity and efficacy. I’d feel better about that if we didn’t already have a lot of history of that process taking too long, being swayed by non-data driven factors (e.g., Aduhelm), or being frequently circumvented.
It gets worse. Christopher Mims just wrote about how AI is moving from the cloud to edge devices (like your phone or home appliance). Edge computing is going to be a big part of our future, including healthcare, but, as computer science professor Elisa Bertino pointed out to him, how can anyone certify/regulate AI that is evolving on its own, in the real world? It won’t necessarily resemble the A.I. that it started out as; it’s going to depend on the data/inputs it receives.
Mr. Mims also warns: “Modern AI, which is primarily used to recognize patterns, can have difficulty coping with inputs outside of the data it was trained on.” Oh, boy — it’s going to run into a lot of that with health care. People are messy, so to speak, and a lot of that mess impacts their health. A.I. better be ready to deal with it.
AI is going to evolve much more rapidly than other healthcare technologies, and our existing regulatory practices may not be sufficient, especially in a global market (as we’ve seen with CRISPR). Not to be facetious, but we may need AI regulators to oversee AI clinicians/clinical support, just as we may need AI lawyers to handle the inevitable AI-related malpractice suits. Only another black box may be able to understand what a black box is doing.
I worry that we’re thinking about how we can use A.I. to make our healthcare system do more of the same, just better. I think that’s the wrong approach. We should be going to ground principles. What do we want from our healthcare system? And, then, how can A.I. help get us there?
For example, we should want everyone to have access to affordable health care – when they need it, where they prefer it. That health care should be tailored to the individual, including genetics, environment, and socio-economic status, and should be based on solid evidence. That all sounds like a list of the usual platitudes, but none of it is currently true. How can A.I. help make it true, or, at least, truer?
If A.I. for healthcare is a better Siri or a new decision support tool in an EHR, we’ve failed. If we’re setting the bar for A.I. to only support clinicians, or even to replicate physicians’ current functions, we’ve failed. We should be expecting much more.
E.g., how can we use A.I. to democratize health care, to get advice and even treatment in people’s hands? How can we use it to help health care be much more affordable? How can A.I. help diagnose issues sooner and deliver recommendations faster and more accurately?
In short, how can A.I. help us reorient our health care from the healthcare system that delivers it, and the people who work in it, to our health? If that means making some of those irrelevant, or at least greatly redefining their roles, so be it.
Right now, much A.I. work in healthcare seems to be focused primarily on granular problems, such as diagnosing specific diseases. That’s understandable, as data is most comparable/available around granular tools (e.g., imaging) or conditions (e.g., breast cancer). But our health is usually not confined within service lines. We need more macro A.I. approaches. We might need A.I. to tell us how A.I. can not just improve our health care but also “fix” our healthcare system. And I’m OK with that.
Kim is a former emarketing exec at a major Blues plan, editor of the late & lamented Tincture.io, and now regular THCB contributor.
Source: thehealthcareblog.com
Matthew’s Health Care Tidbits
Jul 3, 2021
Each week I’ve been adding a brief tidbits section to the THCB Reader, our weekly newsletter that summarizes the best of THCB that week (Sign up here!). Then I had the brainwave to add them to the blog. They’re short and usually not too sweet! –Matthew Holt
In this week’s health care tidbits, a little bit of light was shone on two of the dirty tricks health insurers play. First, San Diego is suing Molina, Centene (owner of Health Net) & Kaiser for misleading patients about which providers are in their networks. Apparently Health Net’s & Kaiser’s directories were 35% inaccurate and Molina’s 80%! Now this may be incompetence, but it is not only false advertising, it’s also a way of weeding out high-cost patients, who may leave when they can’t find a specialist that will take them–and of course avoiding a high-cost patient is a nice earner for health plans.
The next trick is double billing. In a lawsuit unearthed by Bob Herman of Axios, Aetna, which was being paid to manage an employer’s health network, subbed out PT care to an Optum network. Optum then also charged an admin fee, meaning the provider got less and the patient had to pay more. So while Aetna and UnitedHealth Group may appear to be fierce competitors, they’re happy to cooperate when it comes to ripping off their clients.
More bad behavior by health plans and I didn’t even mention them cheating on Medicare Advantage RAFs! But the CEO of Chenmed did.
If we are going to let health insurers profit from handling employer and taxpayer business, we should see those arrangements in the clear light of day. Time for some heavy-handed federal regulation, methinks.