Meet the Team — Vivienne Winborne

Meet Vivienne Winborne, our Director of Communications.

Hi everyone, I'm Vivienne. As Director of Communications for Alphalake Ai, I oversee the global marketing and communications strategy and manage a fantastic team of people from around the world!

How has Alphalake helped you in your career development?

Working for a start-up is a unique experience. You are always thinking on your feet, constantly evolving, and juggling many things at once. Because I have always thrived when thrown in at the deep end, Alphalake is perfect for me! Once you figure out one challenge, you are straight into the next. The last two years have definitely boosted my confidence in my ability to handle anything that is thrown at me.

Three words to describe Alphalake?

Passionate, determined, collaborative.

Do you believe the human touch is still prevalent in healthcare?

The human touch is perhaps the most important element of healthcare, and I don’t believe this can ever be replaced. My belief is that automation and artificial intelligence technology should always be used to humanise healthcare by making care more personalised, more effective, and easier to access. The technology should allow healthcare workers to spend more time with patients rather than distancing them.

Who will be responsible for harm caused by AI mistakes – the computer programmer, the tech company, the regulator or the clinician?

This is a really good question without a straightforward answer. In some ways, they are all responsible, but this responsibility needs to be clearly defined and understood, and I don’t feel like the industry is quite there yet.

Firstly, any AI deployed within healthcare needs to be thoroughly tested and closely monitored so that any errors or bias are identified. This testing and monitoring can't happen only at the point of deployment; it must continue throughout the technology's use. Secondly, there needs to be a deep understanding of, and clear transparency about, where the data that underpins the AI algorithms has come from.

Just as with any survey, all data has limitations. As long as we understand the data set and any inbuilt bias, we can minimise the negative impact. Thirdly, there should always be a clinician sense-checking any decision-making. These steps need to be embedded into the regulations.
