
Prof. Julia Stoyanovich, Director of the Center for Responsible AI – Interview


Julia Stoyanovich is a professor at NYU’s Tandon School of Engineering and the founding Director of the Center for Responsible AI. She recently delivered testimony to the NYC Council’s Committee on Technology about a proposed bill that would regulate the use of AI for hiring and employment decisions.

You are the founding Director of the Center for Responsible AI at NYU. Could you share with us some of the initiatives undertaken by this organization?

I co-direct the Center for Responsible AI (R/AI) at NYU with Steven Kuyan. Steven and I have complementary interests and expertise. I’m an academic with a computer science background and a strong interest in use-inspired work at the intersection of data engineering, responsible data science, and policy. Steven is the managing director of the NYU Tandon Future Labs, a network of startup incubators and accelerators that has already had a tremendous economic impact in NYC. Our vision for R/AI is to help make “responsible AI” synonymous with “AI”, through a combination of applied research, public education and engagement, and by helping companies large and small – especially small – develop responsible AI.

In the last few months, R/AI has actively engaged in conversations around ADS (Automated Decision Systems) oversight. Our approach is based on a combination of educational activities and policy engagement.

New York City is considering a proposed law, Int 1894, that would regulate the use of ADS in hiring through a combination of auditing and public disclosure. R/AI submitted public comments on the bill, based on our research and on insights we gathered from jobseekers through several public engagement activities.

We also collaborated with The GovLab at NYU and with the Institute for Ethics in AI at the Technical University of Munich on a free online course called “AI Ethics: Global Perspectives” that was launched earlier this month.

Another recent project of R/AI that has been getting quite a bit of attention is our “Data, Responsibly” comic book series. The first volume of the series, called “Mirror, Mirror”, is available in English, Spanish, and French, and is accessible with a screen reader in all three languages. The comic received the Innovation of the Month award from MetroLab Network and GovTech, and was covered by the Toronto Star, among others.

What are some of the current or potential issues with AI bias for hiring and employment decisions?

This is a complex question that requires us first to be clear about what we mean by “bias”. The key thing to note is that automated hiring systems are “predictive analytics”: they predict the future based on the past. The past is represented by historical data about the individuals a company hired, and about how those individuals performed. The system is then “trained” on this data, meaning that it identifies statistical patterns and uses them to make predictions. These statistical patterns are the “magic” of AI; they are what predictive models are built on. Importantly, the historical data from which these patterns were mined is silent about the individuals who weren’t hired, because we simply don’t know how they would have done in a job they didn’t get. And this is where bias comes into play. If we systematically hire more individuals from specific demographic and socioeconomic groups, then membership in these groups, and the characteristics that go along with that group membership, will become part of the predictive model. For example, if we only ever see graduates of top universities being hired for executive roles, then the system cannot learn that people who went to a different school might also do well. It’s easy to see a similar problem for gender, race, and disability status.
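To make the mechanism concrete, here is a minimal sketch in Python (my own illustration, not code from the interview) using synthetic data and scikit-learn. The feature names and numbers are invented assumptions; the point is only to show how a group preference encoded in historical hiring decisions resurfaces as a predictive feature in the trained model.

```python
# Sketch: how a biased hiring history teaches a model to score group
# membership, not just merit. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

merit = rng.normal(size=n)               # proxy for actual ability
top_school = rng.integers(0, 2, size=n)  # 1 = graduated from a "top" school

# Historical decisions: recruiters weighed merit but strongly preferred
# top-school graduates, so the hired/not-hired label encodes that preference.
hired = (merit + 2.0 * top_school + rng.normal(size=n) > 1.5).astype(int)

# Train a "who gets hired?" model on the historical record.
X = np.column_stack([merit, top_school])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical merit who differ only in school attended.
candidates = np.array([[0.5, 1.0],   # top-school graduate
                       [0.5, 0.0]])  # equally capable non-top-school graduate
print(model.predict_proba(candidates)[:, 1])
```

Running this prints a much higher predicted “hireability” for the top-school candidate, even though the two candidates are identical on merit: the model has simply absorbed the preference baked into the historical labels.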

Bias in AI is much broader than just bias in the data. It arises when we attempt to use technology where a technical solution is simply…


