Case Study: Embedding Ethics in Defense

How Compass Designed a Framework for Responsible Use
At Compass, we approach ethical consulting with the belief that responsibility isn’t an afterthought; it’s infrastructure. In our work with a major defense technology firm (anonymized here), we were asked to do more than offer a few best practices or theoretical reflections. Our task was to build a durable, scalable framework that would allow a company producing weaponizable technology to take meaningful responsibility for its products, not only during design but across every phase of the products’ life cycles.

The stakes were high. As we’ve seen time and again in the tech sector, powerful tools, once out in the world, often exceed the intentions of their creators. Our goal was to reduce this ethical drift. We weren’t seeking utopia or purity, but a system of credible guardrails: a way for our client to engage the defense market without compromising its moral commitments or becoming complicit in human rights abuses. The result of that engagement was a four-part framework that can, we believe, help shape the ethical architecture of the entire defense industry.
Customer Eligibility
The first challenge we tackled was foundational: how to decide who gets to buy the technology in the first place. We proposed a structured screening process that evaluates potential customers on four core dimensions:
- Their formal commitments to arms control treaties,
- Their actual human rights records,
- The transparency and fairness of their governance institutions, and
- The behavior of their militaries.
Each dimension was scored using publicly available, peer-reviewed data sets: Freedom House’s governance index and Human Rights Scores composite, and reporting from the Geneva Centre for the Democratic Control of Armed Forces.
What was critical, however, was not just measuring these dimensions but setting red lines. Rather than allowing poor scores in one area to be balanced out by better ones in another, we insisted that certain ethical thresholds be met across all four criteria. If a potential customer had failed to ratify the Arms Trade Treaty, or if its governance record consistently reflected authoritarianism, it would be excluded from consideration, regardless of market opportunity. For those who met the baseline standards, we introduced a tier system:
- Tier I partners demonstrated excellence across most or all categories and would be treated as standard-risk clients.
- Tier II partners cleared the ethical bar but carried higher risk profiles, and so required additional oversight and contractual conditions.
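The red-line and tiering logic above can be sketched as a simple screening function. The dimension names, the 0–1 scoring scale, and the specific threshold values here are illustrative assumptions, not the actual scoring model we delivered:

```python
# Illustrative sketch of the red-line screening and tiering logic.
# Scale, thresholds, and tier cutoffs are hypothetical.

RED_LINE = 0.4    # minimum acceptable score on EVERY dimension (assumed 0-1 scale)
TIER_I_BAR = 0.8  # assumed average score required for standard-risk (Tier I) status

DIMENSIONS = ("arms_control", "human_rights", "governance", "military_conduct")

def screen_customer(scores: dict) -> str:
    """Return 'excluded', 'tier_ii', or 'tier_i' for a candidate customer.

    A failure on ANY single dimension excludes the customer outright;
    poor scores cannot be offset by strong ones elsewhere.
    """
    if any(scores[d] < RED_LINE for d in DIMENSIONS):
        return "excluded"  # hard red line: no balancing across criteria
    avg = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return "tier_i" if avg >= TIER_I_BAR else "tier_ii"

# Example: strong governance cannot compensate for a failed human-rights score.
candidate = {"arms_control": 0.9, "human_rights": 0.3,
             "governance": 0.95, "military_conduct": 0.85}
print(screen_customer(candidate))  # excluded
```

The key design choice is that the red-line check runs before any averaging, which is what makes the thresholds non-negotiable rather than tradable.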
Just Use
Establishing a rigorous customer-screening process was only the beginning. Even customers with admirable records must be held to consistent standards of use. The second phase of our framework translated longstanding principles from just war theory and international humanitarian law into detailed Acceptable Use Policies. These were not vague codes of conduct, but enforceable agreements, contractually binding and tailored to the ethical profile of each client. All customers were required to commit to using the product only in accordance with international laws of armed conflict. That meant prohibiting attacks on civilians, limiting harm to proportional levels, and ensuring that operators of the technology were properly trained—not just in mechanics, but in ethics.
The policies also introduced operational requirements. Clients had to verify that only certified personnel were authorized to use the products. They were obliged to document and report any incident that resulted in injury, death, or significant property damage. And they had to agree to graduated levels of monitoring, including the possibility of submitting to third-party audits or sharing encrypted operational data via embedded hardware.
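One way to see how these operational requirements become enforceable rather than aspirational is to model the policy as a structured record against which actions can be checked. This is a hypothetical sketch; the field names, reporting windows, and tier conditions are assumptions for illustration:

```python
# Hypothetical sketch of an Acceptable Use Policy as a checkable record.
from dataclasses import dataclass, field

@dataclass
class AcceptableUsePolicy:
    client_tier: str                    # "tier_i" or "tier_ii"
    certified_operators: set = field(default_factory=set)
    incident_reporting_days: int = 30   # assumed reporting window
    third_party_audits: bool = False    # stricter oversight for higher-risk clients

def policy_for(tier: str) -> AcceptableUsePolicy:
    """Tier II clients carry additional contractual conditions (illustrative values)."""
    if tier == "tier_ii":
        return AcceptableUsePolicy(tier, incident_reporting_days=7,
                                   third_party_audits=True)
    return AcceptableUsePolicy(tier)

def may_operate(policy: AcceptableUsePolicy, operator_id: str) -> bool:
    # Only certified personnel are authorized to use the product.
    return operator_id in policy.certified_operators
```

Tailoring the policy object to the client's tier mirrors the framework's principle that oversight scales with risk.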
Monitoring
Naturally, these monitoring systems became the subject of the third component in our framework: oversight. Here we had to strike a careful balance. Oversight mechanisms had to be robust enough to detect misuse, but not so invasive that they violated client confidentiality or undermined operational security. We approached this challenge by thinking in layers. At a minimum, all clients would be required to submit routine usage reports and incident summaries. For higher-risk clients, we introduced provisions for external auditing—by military legal experts, arms control specialists, or human rights monitors with appropriate security clearances.
Beyond human reporting, we explored technological avenues for oversight. We proposed embedding data chips in the product itself—functionally analogous to a flight data recorder—that could document usage metrics such as operator ID, firing patterns, target acquisition data, and even onboard visual and audio feeds. These data would not necessarily be streamed in real time, but could be accessed during audits or after any flagged incident. In some configurations, a subset of this data could be transmitted wirelessly, with proper anonymization or encryption, to allow near real-time compliance tracking without jeopardizing sensitive information.
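The "flight data recorder" idea can be sketched as an append-only usage log whose entries are hash-chained, so that tampering with any past entry is detectable at audit time. The field names and the chaining scheme here are assumptions, not the deployed design:

```python
# Illustrative sketch: an append-only usage log with hash-chained entries,
# so any retroactive edit breaks every subsequent hash and fails audit.
import hashlib
import json
import time

class UsageRecorder:
    def __init__(self):
        self._log = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, operator_id: str, event: str, detail: dict) -> None:
        """Append an entry linked to the hash of the previous one."""
        entry = {"ts": time.time(), "operator": operator_id,
                 "event": event, "detail": detail, "prev": self._prev_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._log.append((entry, digest))
        self._prev_hash = digest

    def verify(self) -> bool:
        """Recompute the chain; any altered entry invalidates the log."""
        prev = "0" * 64
        for entry, digest in self._log:
            if entry["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return True
```

A tamper-evident log of this kind supports after-the-fact audits without requiring real-time streaming, which is precisely the balance the oversight layer aims for.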
Enforcement
Designing these monitoring systems forced us to confront the final question: what happens when a client breaks the rules? Enforcement is the point at which many well-meaning ethical policies collapse. Either they are too vague to act upon, or companies are reluctant to follow through when doing so risks profits or political backlash. We took a different approach. Our fourth component presented a graduated series of responses, ranging from modest interventions to decisive termination of access.
We laid out a five-step enforcement sequence:
- Authentication Controls: Only trained, pre-approved users can operate the system.
- Severance of Business Relationship: Halt support and future sales to violators.
- Public Disclosure and Censure: Make violations known to regulators and watchdogs.
- Removal of Technical Support: Without updates or assistance, systems become inoperable.
- Proactive Remote Disabling (by Customer): A last-resort “kill switch” if a device is captured or hijacked.
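The graduated sequence above can be sketched as an ordered escalation ladder. The step names follow the list; the idea of mapping a count of confirmed violations onto successive rungs, and the cap at the last resort, are illustrative assumptions:

```python
# Sketch of the graduated enforcement ladder as an ordered escalation.
# Step order follows the framework's list; trigger logic is hypothetical.

ENFORCEMENT_LADDER = [
    "authentication_controls",
    "severance_of_business_relationship",
    "public_disclosure_and_censure",
    "removal_of_technical_support",
    "proactive_remote_disabling",
]

def next_response(confirmed_violations: int) -> str:
    """Map a count of confirmed violations onto the ladder.

    Escalation is monotonic and capped at the final, decisive step.
    """
    if confirmed_violations <= 0:
        return "none"
    index = min(confirmed_violations - 1, len(ENFORCEMENT_LADDER) - 1)
    return ENFORCEMENT_LADDER[index]
```

Encoding the ladder as an explicit ordered list keeps enforcement predictable: clients know in advance what each successive violation triggers, which is what makes the policy credible rather than discretionary.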
Taken together, the four components—eligibility, usage, monitoring, and enforcement—form a self-reinforcing whole. The framework creates overlapping safeguards that reduce the chances of misuse, increase transparency, and empower companies to act decisively when harm occurs.
But perhaps more importantly, it changes the default posture. Instead of reacting to scandals or controversies after the fact, companies that adopt this model lead with accountability. They make responsibility a prerequisite, not a retroactive fix.
We do not pretend this framework is flawless or final. There are open questions—technical, legal, and political—that will continue to evolve. But what we have proven is that serious ethical stewardship in defense is not only possible, but practicable. And for the industry as a whole, we think it’s time to move from aspiration to action.
At Compass, we don’t view ethics as a liability or a PR shield. We view it as architecture. And in this case, we were proud to help design the blueprint.