Hi guys, I’m Clark, and I’m the Customer Success Manager for NHS North London and a Trainee Junior Consultant!
How has Alphalake helped you in your career development?
Alphalake has helped me in my career development by giving me an insight into how a business runs, particularly from a start-up point of view. Working at Alphalake has also given me opportunities to learn more about the technology and healthcare industries, as well as the consultancy process! Another way Alphalake has helped me is by really developing my sales skills and my ability to approach potential clients. This was a side I hadn’t explored before, but Alphalake’s nurturing and supportive environment has really helped me develop it!
What drew you to Alphalake?
What really drew me to Alphalake was the company’s forward vision and where it wants to go. To me, vision is what really drives a company (or anything, really), and it got me hooked when I did my research prior to applying!
Do you believe the human touch element to still be prevalent in healthcare?
Naturally, I think humans do want a form of human connection, but the lines between AI and being human will become blurred. There may come a point where AI becomes so good at imitating human mannerisms and emotions that it is hard to tell the difference unless it is stated or obvious. However, I do think there will be an Allegory of the Cave moment, where people realise that AI is just an entity programmed by humans and try to ‘escape the cave’ to seek out the human touch. But this may occur in waves or trends.
Who will be responsible for harm caused by AI mistakes – the computer programmer, the tech company, the regulator or the clinician?
Responsibility depends on what the action was and how the cause of the mistake can be traced back. When it comes to responsibility for harm caused by AI mistakes, I would initially exclude the programmer (unless the programmer is the sole, exclusive developer and distributor of the full stack), because they are an individual working for the technology company, which has its own team of developers designing the processes. Then I would turn the conversation towards the type of harm. If the harm was the result of an inference that the clinician interpreted wrongly, the clinician would bear some liability for the mistake. Otherwise, the technology company (and its developers) would carry more of the blame, since it is their technology. If it is a regulatory mistake, in that technology companies adopted a certain protocol or variable made mandatory by the regulatory body, then the regulator would be to blame. Ultimately, I think responsibility depends on the actions taken and whether those actions can be traced back to the clinician, the regulatory body, or the technology company.