The Digital Ethics Charter was created as a set of common principles that digital professionals and those working with "data and technology for public use" can adhere to. The project was the brainchild of Ciaron Hoye (Deputy CIO, Birmingham & Solihull CCG and Midlands Accord) and Kate Walker (Suffolk & North East Essex ICS Digital Programme Director and Programme Lead, East Accord).
In just 13 short months, the charter has grown into a pledge through which professionals commit to thinking and working within an ethical code of conduct, promoting the rights of the people and organisations they serve. Continuing to gain support, it is now widely acknowledged by digital professionals across England's public sector as a commitment that the data they access and use is handled properly and correctly.
In a Gartner CIO Survey this year, digital ethics was defined as "comprising the systems of values and moral principles for the conduct of electronic interactions among people, business and things". Other definitions may be equally valid, but what is increasingly apparent is that digital ethics concerns us all in the rapidly evolving and complex environment in which we operate. The manner in which we conduct ourselves, the ethical and moral choices we make, and the representation of our own belief systems all intertwine, making a guiding compass of ethical behaviour ever more important.
Needless to say, artificial intelligence (AI) focuses a spotlight of accountability on electronic and technological decision making and on the governance structures that wrap around collecting, using, accessing and protecting our data. Principles of stewardship, trust, fairness and transparency walk hand in hand with clinical responsibility, legal duty, medical risk and citizens' rights across geographical boundaries.
If one possible future of our medico-tech ecosystem is people and machines co-existing, with working tasks automated alongside administrative and clinical decision making, then digital professionals must, for their part, be trusted to safeguard against the consequences of such a digital revolution, acting as guardians and/or gatekeepers.
The use of AI in healthcare is not new: published guidance and professional practice for social care have already been considered by the Society for Innovation, Technology and Modernisation (Socitm). However, Covid-19 has accelerated the emergence, adoption and use of such technologies at a rate and scale where technology, and the science behind it, is being leveraged to inform both clinical interventions and individual (patient) ownership of health management.
AI is changing the landscape of healthcare to incorporate a wider social responsibility beyond traditional medical transactions and interactions, treatments and interventions. This medico-tech ecosystem provides qualitative and quantitative data at a pace and in a form that has the potential to create a much bigger picture of health for all societies and, moreover, to inform and drive decision making for those societies beyond the boundaries of health and care.
It might be said that we are on the cusp of health and social advancement; the challenge is how that advancement is managed. In years past, data was ring-fenced to clinicians and other health and care professionals to inform decision making and treatment options. Today we enter a realm of AI tools in which the level of (personal) granular data is unprecedented and the use of machine learning algorithms and predictive modelling is accepted practice.
As digital professionals, and indeed as technology and data stakeholders of every kind, we have a responsibility to preserve the integrity, privacy and proper utilisation of the data we hold. Let us maintain our ethical practices for today and for our future: sign the charter and pledge support for an ethical tomorrow.
Reference: Gartner 2020 CIO Survey, cited in Frank Buytendijk, 'Digital Ethics: The Age of Artificial Intelligence' (2020).