Welcome to the first edition of Gradient Ascent. I’m Albert Azout, a former entrepreneur and current Partner at Cota Capital. On a regular basis I encounter interesting scientific research, startups tackling important and difficult problems, and technologies that wow me. I am curious and passionate about machine learning, advanced computing, distributed systems, and dev/data/ml-ops. In this newsletter, I aim to share what I see, what it means, and why it’s important. I hope you enjoy my ramblings!
Is there a founder I should meet?
Send me a note at albert@cotacapital.com
Want to connect?
Find me on LinkedIn, AngelList, Twitter
And today’s concept cloud…*
I read an interesting ACM paper co-authored by one of my favorite researchers, Jon Kleinberg (I recommend his book, Networks, Crowds, and Markets: Reasoning about a Highly Connected World). The paper proposes that we consider designing AI algorithms that are aligned with human behavior: these algorithms should predict, enhance, and/or detect mistakes in human decision making. This alignment is especially relevant in situations where AI is focused on augmenting human-in-the-loop workflows and where explainability and interpretability are necessary (i.e. high-risk situations).
Modern AI algorithms are inspired by human biology, but the similarity seems to end there. To the extent that AI models focus on optimizing absolute performance objectives (e.g. classification loss) irrespective of human factors, they do not necessarily concern themselves with fidelity to the outputs of human decisions.
“Such AI systems tend to have natural one-dimensional parameterizations along which their performance monotonically increase.”
Examples of these one-dimensional parameterizations include (1) the amount of data used to train the system and (2) the amount of time spent in computation. The result of such parameterizations is that AI algorithms are not adept at tuning themselves to human skill levels (attenuating an algorithm by reducing its training data does not translate into an algorithm with a lower, human-like skill level). The authors proceed to develop a chess-playing system, called Maia, which has a natural parameterization that can be targeted to predict human chess moves at particular levels of skill (and also to predict mistakes and blunders).
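To make the contrast concrete, here is a minimal sketch (not the authors’ actual pipeline, and with a deliberately toy “model”) of what a skill-targeted parameterization could look like: the dial you turn is the rating band of the players whose games you train on, rather than search depth or raw data volume. The Game encoding and helper names are hypothetical.

```python
# Sketch only: the "skill dial" is the rating band of the training games,
# not the amount of data or compute.

from dataclasses import dataclass
from collections import Counter, defaultdict
from typing import Dict, List, Tuple


@dataclass
class Game:
    white_rating: int
    black_rating: int
    moves: List[Tuple[str, str]]  # (position_key, move_played) pairs, hypothetical encoding


def filter_by_skill(games: List[Game], target_rating: int, band: int = 100) -> List[Game]:
    """Keep only games where both players fall inside the target rating band."""
    lo, hi = target_rating - band, target_rating + band
    return [g for g in games
            if lo <= g.white_rating <= hi and lo <= g.black_rating <= hi]


def fit_move_model(games: List[Game]) -> Dict[str, str]:
    """Toy 'model': the most frequent human move per position at this skill level."""
    counts: Dict[str, Counter] = defaultdict(Counter)
    for g in games:
        for position, move in g.moves:
            counts[position][move] += 1
    return {pos: moves.most_common(1)[0][0] for pos, moves in counts.items()}


def build_skill_ladder(games: List[Game], levels=(1100, 1500, 1900)):
    """One model per skill level, in the spirit of training on rating-binned games."""
    return {level: fit_move_model(filter_by_skill(games, level)) for level in levels}
```

The point of the sketch is that lowering the target rating does not mean “less data” or “less compute”; it means data drawn from a different population, which is a fundamentally different kind of knob than the one-dimensional parameterizations described above.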
As I was reading this paper, I had a few questions/thoughts:
What is the ultimate goal of an AI algorithm aimed at human skill alignment? Is it to embody human skills irrespective of their correctness? Or is the idea to eventually augment human skill using the same scale of measurement?
Are these aligned algorithms assumed to be any more interpretable than non-skill-aligned algorithms set at the same level of performance? Just because two people arrive at the same conclusion does not imply that their reasoning is identical.
Presumably, if you could increase the skill level of such an AI system indefinitely, wouldn’t you eventually end up with a network that is simply optimized for absolute performance (in this case, superhuman skill at some task)?
Or does human skill have some deeper meaning?
I then lost some sleep thinking about what human-in-the-loop actually means. As the term is used today, I think it refers to the outer loop of a training process, where humans are explicitly or implicitly labeling/annotating training data. But in human-machine collaborative situations, the coupling has to be far tighter. There likely needs to be a two-way communication and exchange of concepts and semantics, with skill being only one dimension; other dimensions could include empathy, fairness, curiosity, etc. I could certainly see the benefits of an intelligent system with such properties, and what that would mean for the ways in which these systems need to be architected.
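For concreteness, this is roughly what I mean by today’s “outer loop” usage of the term: a hypothetical, active-learning-style sketch in which the human’s only role is to supply labels on the examples the model is least sure about. The function names and callables are illustrative, not any particular library’s API.

```python
# Hypothetical sketch of today's "human-in-the-loop": the human sits in the
# outer training loop as a labeler, not as a collaborator exchanging concepts.

from typing import Callable, List, Tuple


def human_in_the_loop_training(
    unlabeled: List[str],
    ask_human: Callable[[str], int],  # the human supplies a label for one example
    train: Callable[[List[Tuple[str, int]]], Callable[[str], float]],  # returns a confidence function
    rounds: int = 5,
    batch: int = 10,
) -> List[Tuple[str, int]]:
    labeled: List[Tuple[str, int]] = []

    def confidence(x: str) -> float:
        return 0.0  # untrained model: treat every example as equally uncertain

    for _ in range(rounds):
        # Pick the examples the current model is least confident about...
        pool = sorted(unlabeled, key=confidence)[:batch]
        # ...and route them to a human for annotation (the only "loop" the human is in).
        labeled += [(x, ask_human(x)) for x in pool]
        unlabeled = [x for x in unlabeled if x not in pool]
        confidence = train(labeled)  # retrain on the growing labeled set

    return labeled
```

Notice how thin the interface is: the human sees isolated examples and emits labels, with no exchange of concepts, reasons, or goals in either direction. That thinness is exactly what I suspect will have to change in genuinely collaborative systems.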
What do you think?
Some interesting companies I have run into, focused on this subject area:
Perceptive Automata - predicting human intention for autonomous vehicles
Veo Robotics - enabling standard industrial robots to perceive their environments for human safety
Arthur.ai - model monitoring for AI, including explainability
*When researching topic areas I have been using a neat mind mapping software called XMind. I will include a concept cloud in all my posts.
Disclosures
While the author of this publication is a Partner with Cota Capital Management, LLC (“Cota Capital”), the views expressed are those of the author alone, and do not necessarily reflect the views of Cota Capital or any of its affiliates. Certain information presented herein has been provided by, or obtained from, third party sources. The author strives to be accurate, but neither the author nor Cota Capital guarantees the accuracy or completeness of any information.
You should not construe any of the information in this publication as investment advice. Cota Capital and the author are not acting as investment advisers or otherwise making any recommendation to invest in any security. Under no circumstances should this publication be construed as an offer soliciting the purchase or sale of any security or interest in any pooled investment vehicle managed by Cota Capital. This publication is not directed to any investors or potential investors, and does not constitute an offer to sell — or a solicitation of an offer to buy — any securities, and may not be used or relied upon in evaluating the merits of any investment.
The publication may include forward-looking information or predictions about future events, such as technological trends. Such statements are not guarantees of future results and are subject to certain risks, uncertainties and assumptions that are difficult to predict. The information herein will become stale over time. Cota Capital and the author are not obligated to revise or update any statements herein for any reason or to notify you of any such change, revision or update.