Google and AI for dummies

Google first unveiled Google Assistant at its Google I/O conference in May 2016, pitching the new virtual assistant as an improvement on the Google Now experience, as well as an expansion of Google’s existing “Ok Google” voice controls.

At its October “Made by Google” event (https://madeby.google.com/), Google showed the devices in which the Assistant will be embedded: phones, wearables, cars, computers and our homes. The last of these, whimsically named Google Home (https://madeby.google.com/home/), is the most worrying, because “…With your permission, Google Home will learn about you and get personal…”, meaning that we will send Google a constant flow of information: every conversation, every TV program we watch, every person who comes to visit will be known.

If we put this launch alongside the equally recent “Partnership on AI” (https://www.partnershiponai.org/), whose participants also include Amazon, Facebook, IBM and Microsoft, formed “…to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society…”, we realize that AI is no longer a sexy word for clever scientists or sci-fi writers but a clear business mainstream that our banking clients cannot do without.

If the digital champions (to the list we can also add Apple and Elon Musk) are now ready to embed AI into their services and products, what are the banks doing about it? We know for sure that players are thinking about how to leverage this relatively new technology, but they are still in the lab phase, with no market launch (at this moment) of any AI-powered service.

One very interesting area of application is the enhancement of the Data Quality frameworks that banks have implemented or are implementing. As we read in a really interesting NASA paper (“Automated Data Quality Assessment in the Intelligent Archive”): “…to be considered “intelligent”, the data architecture of the future should operate effectively with minimal human guidance, anticipate important events, and adapt its behavior in response to changes in data content, user needs, or available resources. To do this, the intelligent archive will need to learn from its own experience and recognize hidden patterns in incoming data streams and data access requests…”. This shift will be very relevant to ensuring the efficacy and efficiency of the pile of investments already made, and still planned, on Data Quality…are you thinking about it?
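To make the idea concrete, here is a minimal sketch, in Python with scikit-learn, of what “recognizing hidden patterns in incoming data streams” could look like inside a Data Quality framework. The transaction columns, distributions and contamination rate are purely illustrative assumptions of mine, not something taken from the NASA paper: an Isolation Forest learns the shape of “normal” records from history and quarantines incoming outliers, without anyone writing explicit validation rules.

```python
# Minimal sketch: learned data quality checks on a hypothetical
# bank transaction feed (amount, hour of day, channel code).
# Nothing here comes from the NASA paper; it only illustrates
# "learning hidden patterns" instead of hand-coding rules.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical records, assumed mostly clean, used as training data.
history = np.column_stack([
    rng.lognormal(mean=4.0, sigma=1.0, size=10_000),  # transaction amounts
    rng.integers(8, 20, size=10_000),                 # business hours
    rng.integers(0, 3, size=10_000),                  # three known channels
])

# 'contamination' is the share of records we expect to be anomalous.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(history)

# New batch: one plausible record and one corrupted record
# (negative amount, impossible hour, unknown channel) whose
# combination a static rule set might miss.
batch = np.array([
    [55.0, 14, 1],
    [-9_999.0, 31, 7],
])

flags = model.predict(batch)  # +1 = looks normal, -1 = route to review
for record, flag in zip(batch, flags):
    status = "OK" if flag == 1 else "QUARANTINE"
    print(status, record)
```

The point of the design is that nobody wrote the rule “amount must be positive and the hour below 24”: the model infers the envelope of plausible records from history and adapts each time it is refitted on new data, which is exactly the “minimal human guidance” the NASA paper calls for.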