A "common fallacy" of NHS leaders is the assumption that new technologies can reverse inequalities, the authors add. The reality is that tools such as AI can create "additional barriers for those with poor digital or health literacy".
"We caution against technocentric approaches without robust evaluation from an equity perspective," the paper concludes.
There's a reference to advanced radiology, for example, to support cancer diagnoses, which comes at the expense of those who don't have access to it. And to your point, Venerable:
Published in the Lancet Oncology journal, the paper instead argues for a back-to-basics approach to cancer care. Its proposals focus on solutions like hiring more staff, redirecting research to less trendy areas including surgery and radiotherapy, and creating a dedicated unit for technology transfer, ensuring that treatments that have already been proven to work are actually made part of routine care.
In the US, the health care system, as a privately operated industry, is wholly removed from US government-funded AI research … so I don't see an opportunity there to divert funds in a meaningful way. (Saying this for others to consider; I know you know this.)
And the amount of US government funding for AI research pales in comparison to industry's … the only R&D funding that the US government can use in a transformative way is in the defense industry, which, of course, they do, with some trickle-down effects. Sigh … making bombs and bomb infrastructure while opportunists tease out civil-use technology in a wink-wink kind of way.
And, to a smaller extent, in medical science.
In a Faustian kind of deal that looks pretty good in that regard, the US health care system is shielded from bionic duckweed (or futurewashing): speculation about AI potential doesn't generate ROI, which is unacceptable to stockholders no matter how attractive futurists try to make AI look in a GDP-focused way.
By contrast, the US power and transportation infrastructures are wholly dependent on federal and state-level funding for maintenance and transformation. Here I have noticed the futurewashing … consistent with the article's reference to holding off on high-speed rail investments in the UK. There is also the dilemma facing private industry and the Elon Musks of the world, who must depend on these infrastructures to do what they envision.
Let's all remember how the state of Texas, wanting to be "its own man", decided to operate its power infrastructure as a separate grid from the federally funded hub-and-spoke infrastructures throughout the rest of the country. That hasn't really worked out. So the Elon Musks of the world need a Faustian kind of deal with the US government; alas, it's not available. Oh yes, that's why they are now investing in nuclear fusion to power their own greedy data centers (!) (Seriously??)
It would be interesting to model the investment costs to demonstrate this. On second thought, no; it's going down a rabbit hole. Bhante spent two months releasing his "investment" analysis, and I don't think we'll see anything else matching its thoroughness and pointedness in the near future. (Or maybe I'm out of the know, dead wrong, and the other papers are making their way around academia.)
Are those securing the necessary funds and building out the AI capacity the same people who would otherwise be translating? All three of those activities require different skillsets, IMO. So, I wonder whether the AI effect is, instead, demotivating would-be translators because their work would seem irrelevant.
Who are those would-be translators, where are they incubating now, and how is that being supported in material ways? Much gratitude to SuttaCentral for helping sort this out.
