Sasha Romanosky, PhD

I research topics in the economics of security and privacy, AI, cybercrime, cyber insurance, and national security. I am a Senior Policy Researcher at the RAND Corporation, a faculty member of the Pardee RAND Graduate School, and an affiliated faculty member in the Program on Economics & Privacy at the Antonin Scalia Law School, George Mason University. My research has appeared in journals such as the Journal of Policy Analysis and Management, the Journal of Empirical Legal Studies, the Journal of Cybersecurity, the Journal of National Security Law and Policy, the Berkeley Technology Law Journal, the International Journal of Intelligence and CounterIntelligence, and ACM's Digital Threats: Research and Practice (DTRAP). I was also appointed to DHS's Data Privacy and Integrity Advisory Committee (DPIAC), which advised the Secretary of Homeland Security and DHS's Chief Privacy Officer on policy, operational, and technology issues.
My research is motivated by cybersecurity and by ways to understand and mitigate cyber risk for network defenders and policymakers. Specifically, my efforts concern four areas: AI risks, cyber insurance, cybercrime, and software vulnerability scoring.
Managing Vulnerabilities in AI Systems.
Generative AI tools (including LLMs) are showing amazing capabilities for personal and professional uses. However, they also present unique risks due to the stochastic nature of transformer-based neural networks. These systems introduce vulnerabilities that, unlike typical software vulnerabilities, are fundamentally unpatchable, such as jailbreaks, direct and indirect prompt injection, and other weaknesses that enable evasion and extraction attacks. I have a great interest in understanding these vulnerabilities and developing ways to assess and manage their risks. In effect, this is about building a Vulnerability Management framework for AI systems.
In addition, these tools have the capability to change the cyber offense-defense balance. On the one hand, they may be able to find vulnerabilities at scale; on the other, they may also be able to autonomously exploit those vulnerabilities. While AI systems are not yet capable of doing this at scale, my research seeks to better capture and track these capabilities, which may serve as early warning signals to inform policymakers, developers, and users.
Cyber Insurance.
Cyber insurance is such an interesting field, in part because of the evolving nature of the attack surface (more applications, with more vulnerabilities, and more connected devices), as well as an evolving set of threat actors developing new techniques to exploit victim networks. Together, this creates more opportunities for attritional (day-to-day) and catastrophic cyber incidents. Understanding the role of cybersecurity controls, as well as policy interventions to both reduce and manage losses from these events, is becoming increasingly important. For example, a pressing issue is the role of the federal government in facilitating an insurance response that can be invoked to ensure continuity of both the private economy and government functions. See here, here, and here.
Cybercrime.
I built a semi-automated pipeline that identifies the universe of federal crimes, collects their docket filings, and applies natural language processing (NLP), network analysis, and regression methods to understand the features and communities of cases and related charges. I applied this pipeline to federal cyberstalking cases, and our research team identified some wonderful insights. See here and here.
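The implementation details are in the papers; purely as an illustration, here is a minimal sketch (with a hypothetical directory of docket text files and a made-up similarity threshold) of the kind of text-and-network analysis such a pipeline performs: vectorize the filings, link similar cases, and detect communities.

```python
# Minimal sketch of a docket-clustering step: TF-IDF over case filings,
# a similarity graph, and community detection. Inputs are hypothetical;
# the actual pipeline and data sources are described in the papers.
from pathlib import Path

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical directory of docket filings, one text file per case.
docs = {p.stem: p.read_text(errors="ignore") for p in Path("dockets").glob("*.txt")}

# NLP step: represent each filing as a TF-IDF vector.
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(docs.values())

# Network step: connect cases whose filings are sufficiently similar.
sim = cosine_similarity(X)
names = list(docs)
G = nx.Graph()
G.add_nodes_from(names)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sim[i, j] > 0.3:  # made-up threshold, for illustration only
            G.add_edge(names[i], names[j], weight=sim[i, j])

# Community detection over the case-similarity graph.
for k, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {k}: {sorted(community)}")
```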
Software Vulnerability Scoring.
I am also very proud to be part of volunteer and standards-building efforts to study software vulnerabilities and develop tools to measure their severity and exploitation. For example, I am one of the original authors of the Common Vulnerability Scoring System (CVSS), developed in the early 2000s, which has long been an international standard (ITU-T X.1521). See https://www.first.org/cvss for more information.
In addition, I am one of the creators of the Exploit Prediction Scoring System (EPSS). In recent years, it became apparent that CVSS was a poor measure of real-world exploitation. That limitation led us to build an entirely data-driven, machine-learning model for estimating the probability of any vulnerability being exploited in the wild. Much as with other standards like CVE, CWE, and CVSS, EPSS filled a specific gap, and I'm happy to see it quickly gaining wide adoption. See https://ieeexplore.ieee.org/document/10190703. Also, for anyone interested in learning more about exploitation or contributing to this standard, please join the working group at https://www.first.org/epss/.
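For those who simply want to look up scores, EPSS publishes them through a public API at api.first.org. The short example below reflects my reading of that API; the endpoint, parameters, and response fields may change, so check the EPSS site for the current documentation.

```python
# Quick illustration: query the public EPSS API for the current score of a CVE.
# See https://www.first.org/epss/ for the authoritative API documentation.
import requests

def epss_score(cve_id: str) -> dict:
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": cve_id},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json().get("data", [])
    return data[0] if data else {}

# Example: Log4Shell. The response should include the EPSS probability and percentile.
print(epss_score("CVE-2021-44228"))
```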
Working Papers and Papers in Review
Book Publications
Industry Publications and Op-Eds
Conference and Workshop Presentations, and Panel Discussant

This figure plots CVSS and EPSS scores for a sample of vulnerabilities. First, observe how most vulnerabilities are concentrated near the bottom of the plot, and only a small percentage have EPSS scores above 50% (0.5). While there is some correlation between EPSS and CVSS scores, overall the plot provides suggestive evidence that attackers are not only targeting the vulnerabilities that produce the greatest impact or that are easiest to exploit (such as, for example, an unauthenticated remote code execution). This is an important finding because it refutes a common assumption that attackers are only looking for, and using, the most severe vulnerabilities. How, then, can a network defender choose among these vulnerabilities when deciding what to patch first? CVSS is a useful tool for capturing the fundamental properties of a vulnerability, but it needs to be used in combination with data-driven threat information, like EPSS, in order to better prioritize vulnerability remediation efforts.
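As a toy illustration of what combining the two scores can mean in practice, the sketch below (with made-up CVE labels, scores, and thresholds) flags vulnerabilities that are both likely to be exploited and severe, and orders the rest by exploitation probability.

```python
# Minimal sketch of threat-informed prioritization: combine a severity score
# (CVSS) with an exploitation probability (EPSS). The sample data and the
# threshold values are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    cvss: float   # base severity, 0.0-10.0
    epss: float   # probability of exploitation in the wild, 0.0-1.0

inventory = [
    Vuln("CVE-A", cvss=9.8, epss=0.02),
    Vuln("CVE-B", cvss=7.5, epss=0.90),
    Vuln("CVE-C", cvss=5.3, epss=0.01),
]

# Patch first anything that is both likely to be exploited and severe,
# then work through the rest in order of exploitation probability.
urgent = [v for v in inventory if v.epss >= 0.1 and v.cvss >= 7.0]
backlog = sorted((v for v in inventory if v not in urgent), key=lambda v: v.epss, reverse=True)

print("urgent:", [v.cve for v in urgent])    # ['CVE-B']
print("backlog:", [v.cve for v in backlog])  # ordered by EPSS
```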

The figure shows actual exploit observations for a sample of vulnerabilities. Each row represents a separate vulnerability (CVE), each blue line represents an observed exploit, and the red dots mark the time of public disclosure of the CVE. (Note that we are not tracking whether these exploits are successful.) While it is difficult to draw conclusive insights from these behaviors, we can comment on general characteristics. First, simply viewing these data is interesting because they provide a novel view into real-world exploit behavior. Indeed, it is exceedingly rare to see these kinds of data publicly available, and we are fortunate to be able to share them with you. It is also thought-provoking to examine and consider the different kinds of exploit patterns that emerge.
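For readers who want to build a similar view from their own exploit telemetry, here is a purely hypothetical sketch (invented CVE labels and dates) of how such a timeline figure can be drawn.

```python
# Hypothetical illustration of the exploit-timeline figure: one row per CVE,
# tick marks for observed exploit activity, and a dot for public disclosure.
# The CVE labels and dates below are invented; the real data are described in the paper.
from datetime import date

import matplotlib.dates as mdates
import matplotlib.pyplot as plt

observations = {  # CVE -> (disclosure date, observed exploit dates)
    "CVE-X": (date(2022, 1, 10), [date(2022, 1, 20), date(2022, 2, 3), date(2022, 5, 1)]),
    "CVE-Y": (date(2022, 3, 5), [date(2022, 3, 6), date(2022, 3, 7)]),
    "CVE-Z": (date(2022, 2, 1), [date(2022, 8, 15)]),
}

fig, ax = plt.subplots(figsize=(8, 3))
for row, (cve, (disclosed, exploits)) in enumerate(observations.items()):
    ax.eventplot(mdates.date2num(exploits), lineoffsets=row, linelengths=0.6, colors="tab:blue")
    ax.plot(mdates.date2num(disclosed), row, "o", color="tab:red")

ax.set_yticks(range(len(observations)))
ax.set_yticklabels(list(observations))
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=2))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y-%m"))
ax.set_xlabel("date")
plt.tight_layout()
plt.show()
```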
Common Vulnerability Scoring System (CVSS)
I am one of the original authors of CVSS, and have been working on it since 2003. Please see FIRST.ORG for a full description of the current standard.

Currently, corporate IT management must identify and assess vulnerabilities across many disparate hardware and software platforms. They need to prioritize these vulnerabilities and remediate those that pose the greatest risk. But when there are so many to fix, with each being scored differently across vendors, how can IT managers convert this mountain of vulnerability data into actionable information? The Common Vulnerability Scoring System (CVSS) is an open framework that addresses this issue by providing a single, open, standardized score for every vulnerability, which organizations can use to prioritize remediation.
CVSS is part of the Payment Card Industry Data Security Standard (PCI-DSS) and NIST's SCAP project, and it has been formally adopted as an international standard for scoring vulnerabilities (ITU-T X.1521).
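To give a flavor of the scoring arithmetic, here is a rough sketch of the CVSS v3.1 base-score calculation for one example vector; the weights and equations follow my reading of the specification, and FIRST.ORG remains the authoritative reference.

```python
# Rough sketch of the CVSS v3.1 base-score arithmetic for one example vector
# (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H). Metric weights and equations reflect my
# reading of the v3.1 specification; consult FIRST.ORG for the authoritative version.
import math

def roundup(x: float) -> float:
    """Round up to one decimal place (an approximation of the spec's Roundup function)."""
    return math.ceil(x * 10) / 10

# Metric weights for this example: network / low complexity / no privileges /
# no user interaction, scope unchanged, and high confidentiality, integrity, availability impact.
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85
c, i, a = 0.56, 0.56, 0.56
scope_changed = False

iss = 1 - (1 - c) * (1 - i) * (1 - a)
impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15 if scope_changed else 6.42 * iss
exploitability = 8.22 * av * ac * pr * ui

if impact <= 0:
    base = 0.0
elif scope_changed:
    base = roundup(min(1.08 * (impact + exploitability), 10))
else:
    base = roundup(min(impact + exploitability, 10))

print(base)  # 9.8 for this vector
```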
Vulnerability Management
IT organizations consume great resources in identifying and remediating computer vulnerabilities. Compound this with the reality that the group finding the vulnerabilities is generally not the group fixing them, and the result is a resource-intensive and sometimes adversarial organizational dynamic. Managing and Auditing IT Vulnerabilities is the 6th in a series of Global Technology Audit Guides (GTAGs) published by the Institute of Internal Auditors (the IIA). We discuss the steps of identifying, assessing, and then prioritizing computer vulnerabilities. We differentiate many of the characteristics of low- and high-performing vulnerability management organizations, and we include a number of metrics that an organization can use to establish a baseline and track its progress. We recognize that immediate benefits are achieved by remediating individual, critical vulnerabilities. However, as shown in the diagram, effective vulnerability management means integrating and aligning IT security with the organization's existing IT management processes (e.g., within an ITIL framework).
Security Patterns
Patterns are a beautiful way of organizing and formalizing proven solutions to recurring problems. They were developed by Christopher Alexander in the 1970s. Alexander observed and documented the relationships that existed between things: objects, spaces, light, people, passages, and moods. From this work emerged architectural patterns and pattern languages. This methodology was later adapted to Object Oriented (OO) programming and then to Information Security. If you ever consider writing patterns yourself, there are a couple of important points to keep in mind; the resources below are a good place to start.
Visit Markus Schumacher's site or hillside.net for more information on security patterns.
