Published on 15/05/2019 | Written by Heather Wright
AI ethics is off the agenda, but Google digs deep for AI fund…
An Australian project to develop an AI-driven national suicide monitoring system has garnered US$1.2 million from Google as part of the tech giant’s push for AI to be used for social good.
The Turning Point project is one of 20 AI-based projects globally, across health, misinformation, economic opportunity, environmental protection, conservation, education and crisis and emergency response, sharing US$25 million in grants.
Other winning projects include initiatives to apply machine learning to weather and agricultural data to improve irrigation for resource-strapped farmers in Africa and the Middle East, using satellite imagery to detect illegal mines, and using natural language processing and ML to extract and connect information in case documents to enable human rights lawyers to better research and defend cases.
The funding, via Google’s charitable arm Google.org, comes just a month after Google disbanded its AI ethics board a mere 10 days after forming it, following a myriad of problems, including a petition signed by thousands of employees demanding the removal of one board member, citing her anti-LGBTQ and anti-immigrant views. The petition noted that the inclusion of Kay Cole James – president of the conservative think tank Heritage Foundation – ‘significantly undermines Google’s position on AI ethics and fairness’.
The inclusion of the CEO of a drone company also raised concerns. Google has previously faced flak for a controversial contract with the Pentagon which saw it using AI to analyse drone footage. Google decided not to renew the contract after facing heavy protest from employees.
The winners of the AI Impact Challenge are unlikely to face the same flak, however. The projects range from pest-control tracking and analysis in India, and deep learning for bioacoustics monitoring paired with commonplace mobile tech to track rainforest health, to vaccine viability projects and a fact-checking project developing trend monitoring and clustering tools to aid fact checkers’ analysis.
An Indonesian project is building an image recognition tool to improve plastic recycling rates, reduce ocean plastic pollution and strengthen waste management in under-resourced communities.
There’s even a grant for the United States’ Trevor Project, which is using natural language processing and sentiment analysis to determine an LGBTQ youth’s suicide risk level to better tailor services for individuals seeking help.
The grantees were selected from more than 2,600 applications from 119 countries.
Jacquelline Fuller, Google.org president, says 40 percent of the applications came from organisations with no previous AI experience.
No doubt with that in mind, the recipients – who Fuller says were thoroughly vetted and chosen based on feasibility, potential for impact, scalability and the responsible use of AI – will also receive mentoring from Google AI experts, along with credit and consulting from Google Cloud and the chance to join a customised accelerator program from Google Developers Launchpad.
Turning Point says its project will use AI methodologies to streamline the coding of national ambulance suicide-related attendance data. “The resulting data would play a central role in informing public health prevention, policy and intervention, as well as identifying emerging trends, hidden populations and geographic hotspots for targeted responses relating to suicide,” Turning Point says.
“Suicide rates are unfortunately continuing to rise in Australia and around the world. This grant from Google gives us the opportunity to undertake a project that has huge potential to make a positive impact, and we are incredibly grateful for their generous commitment to supporting this work,” Turning Point director and Monash University professor of addiction studies and services, Dan Lubman says.