There is a concern among regulators that robo-advisers do not make their selection process clear to customers, so algorithmic advice may be biased, especially where there is a choice of product providers. There are several ways such sites can lead clients to believe that their choice is wider and the process fairer than it really is. One is simply the way the starting set of companies is selected: if you have a mid-market product and want to favour it, it is easy to include only alternatives that are in some obvious way 'worse' - say, more expensive or offering less cover. The manner in which selection criteria are applied can affect outcomes too. If criteria are applied in a series of elimination 'rounds', then knocking out all the more expensive products first can make it easy for a favoured product to win on a coverage criterion in the second round. Even weighing all the criteria together in a balanced scorecard can be gamed - for instance through the weighting applied to each factor. That is why the Financial Conduct Authority in the UK is interested in how robo-advice works: in particular, how these services meet requirements to check suitability, but also other advice safety issues.
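To make the point concrete, here is a minimal sketch - with entirely invented products, prices, cover levels, and weights - showing how the same small product set can be made to produce three different 'best' recommendations, depending on whether criteria are applied in elimination rounds or combined in a weighted scorecard, and on how the weights are chosen:

```python
# Hypothetical illustration: three invented products, each of which can be
# made to "win" by choosing the selection procedure (or its weights).

products = {
    "A": {"price": 100, "cover": 50},   # cheapest, least cover
    "B": {"price": 120, "cover": 90},   # mid-market
    "C": {"price": 150, "cover": 100},  # priciest, widest cover
}

def sequential_rounds(items, price_cap):
    """Round 1: eliminate anything above the price cap.
    Round 2: of the survivors, pick the widest cover."""
    survivors = {k: v for k, v in items.items() if v["price"] <= price_cap}
    return max(survivors, key=lambda k: survivors[k]["cover"])

def weighted_scorecard(items, w_price, w_cover):
    """Score every product on all criteria at once, normalised to [0, 1]
    (cheapest price scores 1 on price, widest cover scores 1 on cover)."""
    min_price = min(v["price"] for v in items.values())
    max_cover = max(v["cover"] for v in items.values())
    def score(v):
        return w_price * min_price / v["price"] + w_cover * v["cover"] / max_cover
    return max(items, key=lambda k: score(items[k]))

# The price cap conveniently removes C before cover is ever compared.
print(sequential_rounds(products, price_cap=130))               # -> B
# Tilt the scorecard weights and a different product wins each time.
print(weighted_scorecard(products, w_price=0.9, w_cover=0.1))   # -> A
print(weighted_scorecard(products, w_price=0.2, w_cover=0.8))   # -> C
```

Nothing here is hidden from an auditor who can read the code - the point is that none of these procedures looks unreasonable on its face, which is exactly why regulators want the selection logic disclosed.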
It is not just bias in sales that should concern us - bias also occurs in underwriting and probably in claims. Humans are prone to favour groups to which they belong, and by implication to disfavour others. That can reinforce existing societal prejudices and make life harder - more expensive policies and more declined claims - for minorities. Is there hope? This article from Daniel Schreiber, CEO and co-founder of Lemonade, explains how he thinks an AI we may never understand can eliminate discrimination and bias in insurance. Of course, he is not talking about bias in insurer selection - Lemonade is a single-insurer offering, not a marketplace.