9 Responses

  1. Daniel Tunkelang, March 10, 2009 at 13:57

    I think we mostly agree. When users are satisfied with recommendations, they don’t even think about transparency. Transparency comes in when the recommendations aren’t good enough, which is, in my experience, inevitable. But if the system can’t offer transparency at that point, then the user loses faith in a system that can’t do better than “trust me” + “my bad”.

  2. Vegard Sandvold, March 10, 2009 at 20:26

    @Daniel
    I have very few reasons to disagree with you, but I’m keen to explore this small difference in our views (if there really is one). So for the sake of argument, I want to ask you this…

    Can you name a use case for transparency, in a situation where the user is in danger of losing faith in a recommender system? I’m curious, and would like to hear your thoughts before I settle for agreement :-)

  3. Daniel Tunkelang, March 10, 2009 at 20:42

    Sure. In the past, I found that Netflix made recommendations for me that were better than random, but by no means good enough for me to decide to watch a movie. If I’d understood why it was recommending a particular movie, I might have developed more faith in the recommendations. As it was, I learned not to trust the recommendations enough to act on them, especially given that watching a movie is a significant time investment in an experiential good.

    Actually, checking now, I see that Netflix *is* trying to be more transparent by telling me what movies are the basis for its particular suggestions, i.e., you’ll like X because you enjoyed Y. Still, it makes (to me) absurd suggestions, like telling me I’ll like The Motorcycle Diaries because I liked Y tu mamá también, Harold and Maude, and Amelie. Huh?! What does a biopic about Che have to do with any of these three movies? As it happens, I’ve seen all four movies, and I’m actually in a position to know that the recommendation makes no sense. So that undermines my faith in the other recommendations for movies I haven’t seen.

    More transparency would help. If I knew *why* it made a bad recommendation in this case, I’d be more forgiving. Better yet, if the system let me correct its mistake in a way that expressed my disagreement with its logic (e.g., no, I don’t like all indie films), then I’d build trust in the system–assuming, of course, that it learned from my feedback.
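
    As a rough illustration of that explain-and-correct loop, here is a toy item-based sketch (invented titles, similarity scores, and function names; not Netflix’s actual algorithm):

        # Toy item-item recommender: it explains each suggestion and lets the
        # user reject the *reason*, not just the result. All data is made up.
        similar = {
            "Y tu mama tambien": {"The Motorcycle Diaries": 0.6, "Amores Perros": 0.8},
            "Harold and Maude":  {"The Motorcycle Diaries": 0.5, "Rushmore": 0.9},
        }
        liked = ["Y tu mama tambien", "Harold and Maude"]

        def recommend(liked, similar):
            scores, reasons = {}, {}
            for seen in liked:
                for candidate, sim in similar.get(seen, {}).items():
                    scores[candidate] = scores.get(candidate, 0.0) + sim
                    reasons.setdefault(candidate, []).append(seen)
            return scores, reasons

        def reject_reason(similar, seen, candidate):
            # User feedback: "liking <seen> is not a reason to suggest <candidate>"
            similar.get(seen, {}).pop(candidate, None)

        scores, reasons = recommend(liked, similar)
        for movie in sorted(scores, key=scores.get, reverse=True):
            print(f"{movie}: because you liked {', '.join(reasons[movie])}")

        reject_reason(similar, "Harold and Maude", "The Motorcycle Diaries")
        scores, reasons = recommend(liked, similar)  # re-score after the correction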

  4. Mikael Svenson, March 11, 2009 at 13:19

    You create a recommendation system and you show the users why X and Y were recommended. The user examines the parameters chosen for the recommendations and finds them irrelevant or sub-optimal. How many “bad” suggestions is a user willing to accept before losing faith in the system altogether?

    If the service itself provides good value, you can ignore the recommendations and continue using the “core” functions until the system improves. If it’s a newcomer service, you don’t get many attempts to satisfy the user.

    That’s where suggestions or ratings from people you know come in handy. If your friend Bob watched a movie and liked it, and you often go to the movies with Bob, then his suggestion far outweighs that of a random John who watched the same movie and recommended it.
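
    A back-of-the-envelope way to express that weighting (trust values and ratings invented for illustration, not from any real service):

        # Weight each rating by how much you trust its source: Bob, whose taste
        # you know, counts for far more than a random John. Numbers are made up.
        ratings = [
            {"who": "Bob",  "rating": 4.5, "trust": 1.0},   # friend you see movies with
            {"who": "John", "rating": 1.0, "trust": 0.1},   # stranger
        ]

        def trust_weighted_score(ratings):
            total = sum(r["trust"] for r in ratings)
            if total == 0:
                return None
            return sum(r["rating"] * r["trust"] for r in ratings) / total

        print(trust_weighted_score(ratings))  # ~4.18: close to Bob's 4.5, barely moved by John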

  5. Vegard Sandvold, March 11, 2009 at 20:55

    @Daniel
    Systems that refuse to learn, or that learn in too random a fashion, are really painful to work with. I get extremely frustrated with spellcheckers, on the iPhone and in OpenOffice. It’s like spending time with an obnoxious kid that refuses to listen. Tiresome, to say the least.

    From what you’re describing, it seems like you’re having a less than satisfactory user experience with Netflix. That’s not good. They clearly have to improve the quality of the recommendations in order to gain your trust.

    I don’t use Netflix myself, so I can’t speak from my own experience. But I have some doubts about how transparency would improve my feelings towards mediocre recommendations. Could it actually be that transparency gives you more reasons to be distrustful and critical of the recommender? Perhaps you would be second-guessing each recommendation, imagining more vividly how pleasing it would be to watch the movies you passed up?

    I have to admit that I’m speculating wildly here. Does it seem at all plausible to you?

  6. Vegard Sandvold, March 11, 2009 at 21:03

    @Mikael
    I agree with you, context matters. Even just seeing the movie together with Bob could make it more enjoyable.

    People outperform machines in many aspects of information retrieval, including recommendations. Machines excel as the numbers grow.

  7. Daniel Tunkelang, March 11, 2009 at 21:38

    If Netflix could read my mind and predict my tastes, I’d trust it. I just don’t think anyone or anything can do that. To take more extreme examples, we don’t trust machines or even our friends to reliably recommend job candidates or jobs. At best, they offer up plausible candidates whom we then inspect carefully–because we aren’t persuaded by black-box recommendations.

    Perhaps I’m unusual, but I don’t trust machines to understand what I want, and my experience has only reinforced that distrust. But I’m more receptive to a machine that doesn’t try to be too clever and instead offers explanations for its leaps of inference. Just as I’m willing to explain to people when they misjudge me, I’m willing to explain to machines. But changing the rating on a recommendation, with no further explanation communicated in either direction, hardly feels like communication–and the effect is too indeterminate to be persuasive.

  8. Vegard Sandvold, March 11, 2009 at 23:27

    @Daniel
    I actually trust my friends to make reliable recommendations about quite serious things such as bank loans and car purchases. Perhaps I’m lazy (or just very normal), but I embrace most opportunities to avoid doing tedious research into topics I have little interest in. Haven’t killed me so far :-)

    I’m tempted to take this conversation in a different direction when you mention communication, but I’ll stick to the subject. I checked out some movie recs on http://www.clerkdogs.com today. They have some nifty graphs explaining the match criteria. You can find them by clicking the small “dot” button under the large poster of “Requiem For a Dream”.

    http://www.clerkdogs.com/movies/1615-The-Fountain-2006/matches

    Personally I get nothing out of these graphs. They’re of no use to me as decision criteria. I like the short comparisons under each small poster much better: Sunshine is faster-paced than The Fountain, 12 Monkeys is quirkier, and so on. This kind of pair-wise comparison is simple and easy to grasp. I love it. And I understand that this is transparency. Point taken.
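
    Those pair-wise blurbs are also easy to generate from per-movie attributes; a sketch (attribute names and values invented, not Clerkdogs’ actual data):

        # Generate Clerkdogs-style pairwise comparisons from per-movie attribute scores.
        movies = {
            "The Fountain": {"pace": 3, "quirkiness": 5},
            "Sunshine":     {"pace": 7, "quirkiness": 4},
            "12 Monkeys":   {"pace": 6, "quirkiness": 8},
        }

        def compare(base, other, attribute, more, less):
            diff = movies[other][attribute] - movies[base][attribute]
            if diff > 0:
                return f"{other} is {more} than {base}"
            if diff < 0:
                return f"{other} is {less} than {base}"
            return f"{other} and {base} are about the same"

        print(compare("The Fountain", "Sunshine", "pace", "faster-paced", "slower-paced"))
        print(compare("The Fountain", "12 Monkeys", "quirkiness", "quirkier", "less quirky"))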

  9. Daniel Tunkelang, March 11, 2009 at 23:59

    Re bank loans and car purchases: I suspect the recommendations you’re looking for there are for binary decisions about whether to trust someone not to scam you. I personally like to make those decisions based on track record or proxies thereof, e.g., I prefer sushi places that have been around long enough that there’s been time for someone else to get food poisoning. But I can see how basic trust in someone’s integrity is hard to communicate transparently, and that we may be better off simply relying on transitivity of trust.

    Re Clerkdogs: I think we agree 100% there. So let’s talk about communication. :-)
