
As the use of artificial intelligence (AI) in health care continues to proliferate, so too do concerns about the potential for algorithmic bias to negatively impact patient safety, equity of care, and treatment decisions. Companies that create and market healthcare technologies, such as clinical decision support tools and electronic health records, have a responsibility to ensure these products don’t unintentionally contribute to or exacerbate healthcare inequities or discriminatory practices. Despite several attempts by government agencies to enact accountability policies, the tech industry’s impact on the healthcare sector remains largely unregulated, creating and exacerbating structural racism, particularly against BIPOC, low-income, and Queer and gender non-conforming communities.

Our Theory of Change

ICCR members call for AI accountability and transparency by pressing healthcare companies to acknowledge where the risks of algorithmic harm exist and to disclose their plans for preventing, identifying, and mitigating these harms throughout their product lifecycles.

The Business Case for Action

By conducting the necessary due diligence to prevent, mitigate, and remedy algorithmic harm throughout the lifecycles of their products and processes, healthcare technology companies can avoid costly financial and legal risks.

Current Initiatives

Via a newly launched campaign, our members are engaging companies in the medical device and diagnostics, health insurance, health technology, and industrials sectors to ensure that they are actively developing policies and processes to prevent harms created by algorithms and machine learning.

Our Impact

How ICCR’s members are pressing companies to prevent harms created by algorithms and machine learning in the health technology and related sectors.

Cerner Corp.

In response to investor pressure, agreed to disclose on its website its principles and policies addressing AI fairness and transparency.