
[Tuning In] Liu Feng-Yuan on AI governance: don’t view innovation and governance as oppositions

Written by Emily Fang

Liu highlights the repercussions of innovating without a policy in mind.

Liu Feng-Yuan is the co-founder and CEO of BasisAI. Prior to this, he spent many years harnessing data science and AI for the public good as part of Singapore’s Smart Nation initiative. You can read BasisAI’s blog on responsible AI here.

This interview has been edited for brevity and clarity. 

KrASIA (Kr): How does policy catch up with the ever-evolving technology ecosystem?

Liu Feng-Yuan (LFY): Regarding your statement on the disjunction between policy and technology, I think it’s very real.

I spent my early career in the public sector and public policy. When I think about legislation, lawyers are the ones drawing up the rules, not engineers; engineers are busy writing code. When I was part of the Smart Nation office, we started to hire software engineers and data scientists to write code. They were public servants. When a policymaker or lawyer writes a rule, often an engineer looks at it and goes, ‘I can’t implement that. It’s not that I don’t want to, but I don’t even know what that means. This rule might as well be written in a foreign language.’

I think a lot of what we’re doing is looking at the regulations around artificial intelligence (AI) governance. The Singapore government, together with the World Economic Forum, has released a set of principles on AI governance. What the AI operating system Bedrock [from BasisAI] tries to do is give engineers and data scientists the right tools to do the right thing, with positive constraints. We don’t want responsible AI to just be about checking a box. We provide data scientists building AI models with a paved road, so they can do their work better and faster.

If you want to do the wrong thing, you can walk off the cliff, but if you want to do the right thing, let us empower you to do the right and responsible thing.

How does this work in Bedrock? For every AI application that a data scientist builds, we audit and vet how it treats attributes such as gender or ethnicity. Bedrock will look at whether the model is treating men and women differently. If it is, and that breaches the rules, then the model won’t get deployed live.
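The deployment gate Liu describes can be sketched in a few lines. This is not BasisAI’s actual API; the function names, the demographic-parity metric, and the 10% threshold are all illustrative assumptions about how such a check might work.

```python
# Hypothetical sketch of a fairness gate: before deployment, compare a model's
# positive-prediction rates across a protected attribute and block the release
# if the gap exceeds a policy threshold. Illustrative only, not Bedrock's API.

def approval_rate(predictions, groups, group):
    """Fraction of positive predictions (1s) for members of one group."""
    picked = [p for p, g in zip(predictions, groups) if g == group]
    return sum(picked) / len(picked) if picked else 0.0

def may_deploy(predictions, groups, max_gap=0.1):
    """Demographic-parity check: positive-prediction rates across all groups
    must not differ by more than max_gap, or deployment is blocked."""
    rates = [approval_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates) <= max_gap

# Example: a model that approves 80% of group "M" but only 20% of group "F".
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]
print(may_deploy(preds, groups))  # gap = 0.8 - 0.2 = 0.6 > 0.1 → False
```

The point of wiring the check into the deployment pipeline, rather than leaving it as a manual review step, is exactly the “paved road” idea above: the responsible path is the default path.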

Individuals and leaders within your organization will have to make that call; we give them the tools to make stakeholders aware of the implications of their choices about the AI model. The key is to provide better visibility into the trade-offs. That’s what I mean by bridging policy and tech.

To continue reading, click here to hop to Oasis, by KrASIA.
