Speaker(s): Mike Burgess, Director-General ASD
Date of Speech: 2 July 2018
Body:
Thank you, Fergus. And good afternoon, everyone. It’s my pleasure to be here at the ASPI artificial intelligence master class. I thought I’d share with you today my perspectives on AI from an intelligence and security agency’s viewpoint.
I must also declare up front that I am an enthusiastic amateur when it comes to this topic. My passion is grounded in the successful application of machine learning in cyber security and the exploration of neural networks in my earlier years.
Before I continue, let me introduce myself and the Australian Signals Directorate.
With the exception of the last five years, I have been involved in Defence all of my working life – starting out as an electronics engineer in Adelaide. I’m now back home in Defence after receiving an offer too good to refuse.
Having returned to ASD, I’ve noted one very important thing about the organisation that hasn’t changed since I left. That is, my team’s unwavering commitment to delivering on mission.
A mission that, in the words of ASD’s values, operates in the slim area between the difficult and the impossible. In part, our commitment to mission comes from a clear recognition from everyone that, in this unstable and challenging world, there is a critical need for the sorts of high-quality intelligence, leading cyber security advice and real-world effects that only ASD can deliver. And all of which are delivered for impact.
It also comes from 70 years of culture, born out of the dark days of World War 2, when timely, high-quality signals intelligence often made the difference between victory and defeat. But in large part, the culture comes from the kind of people we seek to employ and retain.
Some of the best and brightest across several generations, from all walks of life, not just engineers - including a large chunk of frighteningly clever millennials.
Gaining the ability to flexibly recruit, train and retain our specialist staff is one of the major reasons why ASD became a statutory agency yesterday. Our mission runs from providing intimate support to military operations through to countering terrorism, countering transnational crime and identifying and countering cyber threats that challenge the security, prosperity and personal freedoms that underpin our rich and vibrant society.
ASD’s purpose is to defend Australia from global threats and help advance Australia’s national interests. We do this by mastering technology, and the application of technology to inform, protect and disrupt:
- Informing by covert acquisition of foreign information not publicly available (known in our business as SIGINT, signals intelligence)
- Protecting, by comprehensively understanding the cyber threat, providing proactive advice and assistance to improve the management of cyber risk by government, business and the community, and
- Disrupting, by applying our offensive cyber capabilities offshore, to support military operations, counter terrorism, counter cyber espionage and serious cyber-enabled crime.
What we do is very hard, and we must continue to operate in that slim area between the difficult and impossible to be successful.
Our people are key to mastering technology and its application. And our people are key to mastering artificial intelligence and its application to ASD’s intelligence and security mission.
Today, we live in a technology-enabled, connected world. With this comes great opportunity and benefits to society and our economy.
Everything is being digitised, everything is being connected and everything is being controlled by software. And there is no doubt, the full potential of connectivity, technology and software is yet to be fully realised. However, the same things that deliver these benefits also represent a significant risk.
Given my perspectives today are focused on the intelligence and security aspects of AI, it’s important I tell you that intelligence and security types break security down to include confidentiality, integrity and availability. This breakdown is relevant to ASD’s mission to inform, protect and disrupt.
So, what is our perspective on AI? Firstly, there is much hype around this topic, and I suggest much of it is unhelpful given the importance of AI. But my mum taught me if you don’t have anything good to say, then don’t say anything. I will therefore refrain from dwelling on the hype.
That said, the uncertainty or confusion around this topic may well be understandable.
I was recently listening to the 'AI with AI' podcast, when the presenter, Andy Ilachinski, commented on the hype and asked whether AI was more alchemy than science. Not in the sense of turning lead into gold; rather, suggesting we are at a stage similar to the early development and understanding of materials science, when we knew something worked but didn’t fully understand why. I have a sense Andy is right.
As I said at the start, I am an enthusiastic amateur, but I am not an expert by any means. However, I am taken by the potential of AI – both its benefits and downside risks. In its simplest form, AI is a computer program running on a CPU.
At ASD, the definition of AI doesn’t really matter but the impact and effects do. To us, AI includes machine learning, data mining, data science, automation and many more techniques that help our people to operate in the slim area between the difficult and the impossible. This help is needed as our world is far more complex today than it has ever been.
Data volumes, the complexity of problems, and advanced threats and adversaries, coupled with a workforce that can’t scale to the task at hand. AI will help ASD meet these challenges and allow more time to be spent on the higher-end problems.
AI will present its challenges, not just for ASD. It is one thing to have a computer offer a set of options for the analyst to decide between. It is another thing to have a computer act automatically and inform the human, allowing its action to be overridden. Or perhaps, a computer deciding everything, acting autonomously and ignoring the human!
In a world where everything is digital, everything is connected, and everything is controlled or managed by software – this last version of AI is a little scary. Trust will be required and very much tested.
Is this currently more alchemy than science? Well, I’d say more science is needed, more research and development, and ASD is and will be investing further in this topic. AI is not a new thing - today autopilots fly aeroplanes and we all trust that. Of course, in this example, the human has the ability to override the autopilot when or if needed.
So, what is the security challenge ahead? For the last 10 years, the security world has been focused on dealing with the problem of wholesale theft of data. As the full potential of technology, connectivity and software is further realised, I think it is time we turn our mind to integrity and availability. Sure, AI can help with protecting against data theft – I’ve been involved in unsupervised machine learning being successfully used to find the unknown knowns.
Machine learning software can identify anomalies in network data far better than a human can, especially when you consider the amount of data involved. AI is not just an issue in the security space – AI will present legal, moral and ethical dilemmas requiring much thought and debate.
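To make that anomaly-detection point a little more concrete, here is a minimal sketch of the general idea in Python using scikit-learn. The feature names and data are invented for illustration; this is not ASD's tooling or method, just one common way unsupervised learning can flag unusual activity in a large volume of network-flow-style records without labelled examples.

```python
# Illustrative only: unsupervised anomaly detection on made-up
# network-flow-style features (bytes, packets, session duration).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flows.
normal = rng.normal(loc=[5_000, 40, 2.0], scale=[1_000, 8, 0.5], size=(2_000, 3))

# A handful of unusual flows: very large transfers over long sessions.
odd = rng.normal(loc=[500_000, 4_000, 600.0], scale=[50_000, 300, 60.0], size=(5, 3))

flows = np.vstack([normal, odd])

# Fit with no labels; the model learns what "typical" looks like and
# scores records that are easy to isolate as anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(flows)  # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(labels == -1)} of {len(flows)} flows as anomalous")
```

In practice, of course, the hard work lies in choosing the features and validating the alerts, but the point stands: at these data volumes, a human analyst cannot do this triage unaided.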
What are the consequences if we let this run so far that we become totally dependent on technology, connectivity and software, including AI? In 2018, I think it is fair to say, we are already dependent on connectivity, at least our children are.
Technology, connectivity and software, including AI, hold much promise, but we shouldn’t just look at the benefits – what are the vulnerabilities, what are the risks? This will require my agency to do what it has done for the last 70 years. As both a poacher and gamekeeper, we know that offence informs defence and defence informs offence.
It is right the world embraces technology, connectivity and software. It is right the world embraces AI. But we must embrace this with our eyes wide open.
We shouldn’t sleepwalk into this, where we suddenly find ourselves in a software-controlled world, where few understand how it all works. How then could you be assured the right decisions are being made? And if not, do you then have the ability to challenge a decision or outcome and change it where necessary?
Humanity should determine where AI begins and AI ends. How much of our world would we outsource to AI? How much of our brain power and decision-making would we hand over? ASD will use AI to maintain our capability edge in order to defend Australia from global threats and help advance our national interests. In the spirit of transparency, there wouldn’t be an intelligence agency on this planet that wouldn’t be thinking about how AI can be exploited, ASD included.
We are both a poacher and a gamekeeper. Our strength and capability come from mastering technology and its application. And, from our offensive capability and tradecraft informing our defensive posture and vice versa.
Let’s embrace AI, but with our eyes wide open. Well done to ASPI for hosting this important event. Thank you. I’d be pleased to take your questions or challenges.