Bringing the Security Analyst into the Loop: From Human-Computer Interaction to Human-Computer Collaboration

Putting Together the Pieces of the Puzzle

Foremost in the minds of the two designers tasked with creating a new knowledge graph visualization was the desire to build something that would help analysts “connect the dots” so that they could tell the story of what had happened. Both designers recognized that the previous visualization, while technically correct, was not very consumable, nor did it meet the goals the team had set for designing for AI:

“We’re the kids with a messy room when we create products. Something that may seem chaotic or out-of-place to our users doesn’t seem so crazy because we created it. We live in this room, in our products. But we need to create something consumable, constructive, and structured when it comes to data visualization and bringing forward explainability and transparency from artificial intelligence.” – Advisor Designer

Designers use metaphors to explain the new and unfamiliar in terms that people – users – understand. If the current visualization of the Advisor knowledge graph brings to mind the complexity of the Internet and the “black box” nature of AI, the designers wondered, what would be an appropriate metaphor for a new one?

After much experimentation, Advisor designers landed on a metaphor closer to how security professionals themselves explain their process and what it is that they do — a puzzle. Puzzles are composed of lots of pieces, some of which fit together, others that don’t, and still others that might be missing. Their job, the designers explained, was to present analysts with all of the pieces of the puzzle that were available (e.g., the rule that triggered the offense, user and asset information, threat intelligence, malicious observables) and let analysts “fill in the empty gaps.”

Using this metaphor, Advisor designers produced several different concepts, one of which featured the use of four “swim lanes.” See Figure 6. This visualization of knowledge graph data addresses the primary reason so many security analysts find knowledge graphs difficult to interpret: the absence of a structured flow through the nodes and edges. With traditional visual representations of a security incident knowledge graph, the many interrelated branches leave no easy way to follow the path from the source of the incident to the possible threat.

In contrast to the existing visualization, this new way of visualizing a knowledge graph reduces complexity by clustering related entities together. Whether entities belong in the same cluster is determined not only by entity type but also by the threats impacting them. The new representation also provides an easy-to-follow path, starting from the source of the security incident – typically a user, an internal asset, or an external entity – and leading to the threat, allowing the security analyst to quickly identify how the security breach proceeded through the network. And, finally, it reduces the clutter of the old diagram by letting security analysts selectively expand the clusters they want to examine in more detail.

Figure 6: Proposed Knowledge Graph Visualization. Source: IBM QRadar Advisor with Watson product team.
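
To make the structure concrete, the grouping logic described above can be sketched in a few lines of code. What follows is a minimal illustration in Python, not the Advisor implementation; the lane names, record fields, and threat labels are all invented for the example.

```python
from collections import defaultdict

# Lanes are ordered to mirror the analyst's mental model of an attack:
# source -> internal assets -> external connections -> threat.
LANE_ORDER = ["user", "internal_asset", "external_ip", "threat"]

# Hypothetical entity records; field names and values are invented for
# illustration and are not drawn from the Advisor data model.
entities = [
    {"id": "user-17", "type": "user", "threat": "credential-theft"},
    {"id": "host-03", "type": "internal_asset", "threat": "credential-theft"},
    {"id": "198.51.100.7", "type": "external_ip", "threat": "credential-theft"},
    {"id": "198.51.100.8", "type": "external_ip", "threat": "c2-beaconing"},
]

def build_swim_lanes(entities):
    """Group entities into ordered lanes by type, clustered by threat."""
    lanes = {lane: defaultdict(list) for lane in LANE_ORDER}
    for entity in entities:
        lanes[entity["type"]][entity["threat"]].append(entity["id"])
    return lanes

lanes = build_swim_lanes(entities)
for lane in LANE_ORDER:
    # Clusters render collapsed (as counts) by default; analysts expand
    # a cluster only when they want its details.
    clusters = {threat: len(ids) for threat, ids in lanes[lane].items()}
    print(f"{lane}: {clusters}")
```

Rendering clusters collapsed by default, and expanding them only on request, is what keeps the diagram legible even when an investigation involves hundreds of IP addresses.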

In effect, this new diagram quickly provides analysts with the answers to their questions by mimicking their workflow and aligning with their mental model of how attacks work. The diagram makes clear what the source of the offense is and where the analyst should start the investigation. Also made explicit are the internal assets involved in the security incident. The diagram identifies any external connections made to suspicious or malicious URLs or IP addresses, and clearly calls out whether network security devices blocked the threat. Payload information is available from within the diagram, as is additional information about all of the entities and their relationships to each other. Lastly, the type of threat and its success or failure in compromising the network are clearly communicated.
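
Read as data rather than as a picture, the diagram amounts to a handful of questions answered over the graph. Below is a hedged sketch of that triage summary, reusing the hypothetical record format from the example above; the "malicious" and "blocked" fields are likewise assumptions for illustration, not Advisor's actual data model.

```python
# Hypothetical connection records, continuing the example above.
connections = [
    {"src": "host-03", "dst": "198.51.100.7", "malicious": True, "blocked": True},
    {"src": "host-03", "dst": "198.51.100.8", "malicious": True, "blocked": False},
]

def summarize_offense(source, connections, threat_succeeded):
    """Collect the answers an analyst needs first when triaging an offense."""
    unblocked = [
        (c["src"], c["dst"])
        for c in connections
        if c["malicious"] and not c["blocked"]
    ]
    return {
        "start_investigation_at": source,              # where to begin
        "unblocked_malicious_connections": unblocked,  # what got through
        "network_compromised": threat_succeeded,       # did the attack succeed?
    }

print(summarize_offense("user-17", connections, threat_succeeded=False))
```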

With this new visualization, the Advisor team provides analysts with all the puzzle pieces they need to quickly assess whether an offense represents a false positive or a real threat.

RESEARCH IMPACT

After the ethnographic research and workshop, the Advisor team worked closely with security leaders and analysts to develop a KG visualization that met the agreed-upon goal: “an analyst can view what really happened in their network for a potential incident and complete triage more efficiently.” Interestingly, security analysts and leaders alike appreciated the new diagram, and for similar reasons.

“The new concept would absolutely be easier to determine if it is a false positive or if something needs to be looked into more or escalated. It’s much easier for us to see the flow of what was going on.” — Security Analyst

“It has the [data] structure, the involved information, and clear definitions of the types of connections and assets.” — Security Analyst

“Honestly, I really appreciate the way the information is organized on this graph. It’s A LOT cleaner. We have had many offenses when the investigation will have several hundred IPs on it, and it’s just almost impossible to easily glean important information out of those. There’s just so much clutter on them.” — Security Leader

Some security leaders did ask whether the Advisor team could support both diagrams. When told “no,” however, they opted for the new diagram, undoubtedly in part because of their own backgrounds as analysts.

A new version of QRadar Advisor with Watson, complete with the new KG visualization, will be released by IBM in Q1 2020. Time will tell whether the new graph diagram will increase usage and sales, but the team (including upper management) remains confident that they made the right choice. This certainty is due, in large part, to the research that drove the decision to work on a new knowledge graph visualization and the research that validated the preference for the new diagram.

The design team’s work on Advisor has also shaped how teams across IBM approach designing for AI. The team regularly consults with teams across the business on how it arrived at the user-centered goals that drove the development of a new AI-powered experience. In addition, the graph has been adopted as a component in IBM’s open-source design system and is currently under review at IBM as a patent application.

CONCLUSION

People, in general, are conflicted about AI. According to one dominant narrative, humans will likely experience a future made more productive and efficient by Artificial Intelligence. Counter-narratives, however, predict a different kind of future – one in which humans become less autonomous, less in control of their lives, and incapable of making decisions and taking action independently of AI tools and technologies. The customers and users of QRadar Advisor with Watson are no different. They believe in the power of AI to advise them of what they don’t know and what they should do, but they also question the ability of – and their desire for – a tool, any tool, to replace them, the human element.

Like our users, AI enterprise solution product teams are conflicted. They believe in the power of the AI products they are designing to benefit the lives of users, yet they also recognize that they are developing products whose goal is to reduce the need for human effort (or what are wistfully thought of as “lower order” tasks and skills).

Humans are – and always will be – a necessary part of the equation. Humans are not just consumers but active producers of the insights that AI models generate. Humans are the agents that create the interfaces and visualizations that people use to interact with AI models and AI-generated insights.

In exploring how the Advisor team came to the decision to replace one KG visualization with another, this case study demonstrates just how entangled humans and technology can be. It also suggests that AI agency and autonomy are less of a threat to humans and human agency than certain parties would suggest. Could it be that Artificial Intelligence is really more of a neutral force, one whose influence both shapes and is shaped by the humans engaged with it?

That change will occur with the introduction, adoption, and adaptation of new technologies is certain. The exact nature of this change is not, however. In challenging the way in which AI-powered insights are represented to analysts and proposing a solution that better aligns with their own mental models, the Advisor team undermined the notion that humans have no role in the future or expression of AI. It took people – designers, engineers, offering managers, security analysts, and security leaders committed to developing a product that users could use and get value from – to find a way to present the information in a way that was consumable and, in the process, to reveal the co-constitutive nature and required human element of AI. In so doing, they call attention to the ways in which individuals can challenge the trope of AI as the harbinger of a future in which individuals are made more productive yet less autonomous.

Recognizing humans and nonhumans as partners in a symbiotic relationship challenges the concept of “human-computer interaction.” Designing from a shared agency perspective means that product teams must consider the interdependence of humans and nonhuman actors and design for two entities. As Farooq and Grudin (2016: 32) argue, “The essence of a good partnership is understanding why the other person acts as they do. Understanding the intricate dance of a person with a software agent requires longitudinal or ethnographic approaches to produce realistic scenarios and personas.”

Liz Rogers is a design research practice lead with IBM Security. After receiving a PhD in cultural anthropology from the University of Wisconsin – Madison, she entered the world of product innovation and never looked back. She has over 18 years of experience working in the design industry, helping teams design compelling products and experiences, based on a deep understanding of user needs, motivations, and behaviors.

Citation: 2019 EPIC Proceedings pp 341–361, ISSN 1559-8918, https://www.epicpeople.org/epic

NOTES

Acknowledgments – I want to express my gratitude for the support, encouragement, and thoughtful engagement provided by Patti Sunderland, the curator for this paper. Without her, this paper simply would not exist. I also want to thank Terra Banal and Andi Lozano, the two designers on the Advisor team. Both are amazing designers and people. Without them, there would be no new knowledge graph visualization, nor would the research have been as successful as it was. Lastly, I’d like to thank the entire Advisor team, as well as the security analysts and leaders who helped us design the new knowledge graph visualization. I can’t wait to see what happens.

1. In smaller security organizations, it is not uncommon for one individual to cover multiple roles, including security leader, security analyst, incident responder, and threat intelligence analyst. Larger, more mature security teams typically distinguish between these roles with differing degrees of granularity. Each of these roles can be identified in multiple ways. A simple search using a website like Indeed.com brings up multiple titles for the people who take on the responsibilities and tasks associated with “security leaders” – e.g., creating, implementing, and overseeing the policies, procedures, and programs designed to limit risk, comply with regulations, and protect the company’s assets from both internal and external threats – such as Chief Security Officer, VP of Security and Risk Manager, and IT Risk Management Director. Similarly, people whose top jobs to be done include protecting company assets against attackers and their tools of attack, detecting the occurrence of cybersecurity events, and investigating the activities and presence of attackers work as SOC Analysts, Information Security Analysts, and Security Engineers. For the sake of clarity and simplicity, in this paper, individuals who perform similar tasks and have the same goals, pain points, and needs in performing these tasks are all referred to by a common title, in this case “security analyst” or “security leader.”

REFERENCES

boyd, danah and Kate Crawford
2012     Critical Questions for Big Data. Information, Communication & Society 15 (5): 662-679. DOI: 10.1080/1369118X.2012.678878

Capgemini Research Institute
2019     AI in Cybersecurity Executive Survey. https://www.capgemini.com/wp-content/uploads/2019/07/AI-in-Cybersecurity_Report_20190711_V06.pdf

Cylance
2018     How Artificial Intelligence Is Liberating Security. https://www.blackhat.com/sponsor-posts/06252018.html. Accessed August 27, 2019.
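
Farooq, Umer and Jonathan Grudin
2016     Human-Computer Integration. Interactions 23 (6): 26-32.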

Future of Humanity Institute, University of Oxford, Centre for the Study of Existential Risk, University of Cambridge, Center for a New American Security, Electronic Frontier Foundation, and OpenAI
2018     The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.

Klinger, Ulrike and Jakob Svensson
2018     The End of Media Logics? On Algorithms and Agency. New Media & Society 20 (12): 4653-4670.

Leonardi, Paul
2011     When Flexible Routines Meet Flexible Technologies: Affordance, Constraint, and the Imbrication of Human and Material Agencies. MIS Quarterly 35 (1): 147-167.

Moss, Emanuel and Friederike Schuur
2018     How Modes of Myth-Making Affect the Particulars of DS/ML Adoption in Industry. Ethnographic Praxis in Industry Conference Proceedings 2018: 264-280.

Neff, Gina and Peter Nagy
2016     Talking to Bots: Symbiotic Agency and the Case of Tay. International Journal of Communication 10: 4915-4931.
2018     Agency in the Digital Age: Using Symbiotic Agency to Explain Human–Technology Interaction. DOI: 10.4324/9781315202082-8.

Orlikowski, Wanda J.
2007     Sociomaterial Practices: Exploring Technology at Work. Organization Studies 28 (9): 1435-1448. DOI: 10.1177/0170840607081138

Ortner, Sherry
1973     On Key Symbols. American Anthropologist, New Series, 75 (5): 1338-1346.

Pew Research Center
2018     Artificial Intelligence and the Future of Humans.

Pfaffenberger, Bryan
1988     The Social Meaning of the Personal Computer: Or, Why the Personal Computer Revolution Was No Revolution. Anthropological Quarterly 61 (1): 39-47.
1992     Social Anthropology of Technology. Annual Review of Anthropology 21: 491-516.

Prescott, Andrew
2019     What the Enlightenment Got Wrong about Computers. https://dingdingding.org/issue-2/what-the-enlightenment-got-wrong-about-computers/

Rose, Jeremy and Matthew Jones
2005     The Double Dance of Agency: A Socio-Theoretic Account of How Machines and Humans Interact. Systems, Signs & Actions 1 (1): 19-37.

SANS Institute
2019     Investigating Like Sherlock: A SANS Review of QRadar Advisor with Watson.

Seaver, Nick
2017     Algorithms as Culture: Some Tactics for the Ethnography of Algorithmic Systems. Big Data & Society, July-December 2017: 1-12.
2018     What Should an Anthropology of Algorithms Do? Cultural Anthropology 33 (3): 375-385.
