There’s an old quote sometimes (mis)attributed to Stalin: “The death of one man is a tragedy, the death of millions is a statistic.”1 The same thing holds regarding policy enforcement: A single customer can be watched carefully, but hundreds or thousands of customers fall to statistics.
Mass detection
Finding customers who violate published policies is almost always a problem of scale. There are many ways to approach it:
- Complaint monitoring. This is the most basic method: you wait for people to tell you they are receiving messages that violate published policies. As I mentioned in my last post, I generally operate with a rule of thumb that direct complaints (where the complainant sends an email directly to your abuse queue) arrive at a rate of about 1%.2 So any well-formed complaint comes with a built-in multiplier of roughly 100.
- Feedback loop detection. This builds on complaint monitoring: all you have done is lower the barrier to lodging complaints by offloading that responsibility to the relationship between service providers. The sad truth is that many providers treat feedback loops merely as a method of obtaining and processing opt-outs, so despite the presence of valuable data, they are still just engaging in complaint monitoring. But providers who leverage this information have a second, richer source of data that can point out policy violations.
- Machine learning. This one is more complex, because there are many different ways to turn machine learning into a tool that is useful for assessment.
- List data. Omnivore3 was one of the first products on the market to examine lists as they were uploaded and determine whether a list would be okay to mail. The great thing about this method is that it at least attempts to find problems before abuse is released onto the Internet.
- Complaint modeling. This takes complaints (either direct or feedback loop) and uses the data they present to triage and prioritize cases for follow-up and investigation.
- Hybrid. Just like it sounds, this combines list-data analysis with complaint modeling to triage and prioritize cases.
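To make the triage idea concrete, here is a minimal sketch of a hybrid score. The weights, field names, and the `triage_score` function are illustrative assumptions, not any provider's actual model; the only value taken from the text is the ~1% direct-complaint rule of thumb:

```python
from dataclasses import dataclass

# Rule of thumb from the text: direct complaints arrive at a rate of
# about 1%, so each one stands in for roughly 100 unhappy recipients.
DIRECT_COMPLAINT_MULTIPLIER = 100

@dataclass
class CustomerSignals:
    direct_complaints: int  # emails sent straight to the abuse queue
    fbl_complaints: int     # complaints arriving via feedback loops
    list_risk: float        # 0.0 (clean) to 1.0 (risky), e.g. from an
                            # Omnivore-style upload-time list check
    volume_sent: int        # messages sent in the same window

def triage_score(sig: CustomerSignals) -> float:
    """Return a priority score; higher means investigate sooner.

    The blend and weights here are hypothetical.
    """
    if sig.volume_sent == 0:
        return 0.0
    # Estimate the "seen and unseen" complaint mass.
    estimated = (sig.direct_complaints * DIRECT_COMPLAINT_MULTIPLIER
                 + sig.fbl_complaints)
    complaint_rate = estimated / sig.volume_sent
    # Blend the reactive signal (complaints) with the proactive one
    # (list quality), so bad lists surface before the abuse goes out.
    return complaint_rate + 0.5 * sig.list_risk

# Order the investigation queue by descending score.
queue = sorted(
    [
        CustomerSignals(2, 40, 0.1, 100_000),  # a few direct complaints
        CustomerSignals(0, 5, 0.9, 10_000),    # risky list, few complaints
        CustomerSignals(0, 1, 0.0, 500_000),   # healthy sender
    ],
    key=triage_score,
    reverse=True,
)
```

Note that the proactive `list_risk` term lets the risky-list customer outrank the one already generating complaints, which is the point of a hybrid approach: it surfaces problems before the complaint stream confirms them.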
All of these methods have their drawbacks. Waiting for complaints to come in (whether directly or through feedback loop mechanisms) means abuse has already happened. Our preference should be to try to prevent abuse. That would mean turning to more proactive solutions, which lean more toward machine learning. But, as Laura Atkins mentioned in a blog post last year, “The problem is this is a moving target and there’s nothing set and forget about it. Algorithms like this need to be constantly maintained and trained.”4
The one thing they all have in common is that they operate in bulk. A single direct complaint is actionable not because it is a single complaint but because it indicates a mass of complaints — both seen and unseen. Feedback loop measures happen based on a mass of complaints. Machine learning algorithms can only work at scale. And triaging complaint streams inevitably means tackling larger volumes of complaints before handling smaller, more individual cases.
Individual correction
Once a customer’s actions have brought about a mass of complaint metrics that warrant closer investigation, the matter turns from looking at many complaints spread over many customers to what has happened with this particular one. But, even this is a scaling exercise: Remember that “the job of policy enforcement is to limit the amount of damage done, prevent that damage from intensifying, and attempt to begin repairs to whatever damage has occurred.” Further, damage can be generated in two directions: toward the customer or the provider.5
Ultimately, policy enforcement has to keep damage from scaling beyond the customer level, where the actions of one customer (or a small group of customers) taint all of the mail sent by the provider. The best way to accomplish this is to deal with each bad customer independently. That prevents the issue from scaling to the point where other providers must escalate their responses from customer-oriented to provider-oriented.
So, when a customer has been identified, policy enforcement agents will attempt to ascertain several things:
1. If a breach of policy has occurred,
2. What policy was breached,
3. How extensive the breach is,
4. How much reputational damage has occurred to
   - the customer, and
   - the company,
5. What will be required to fix the breach, and
6. Whether the customer is willing to do the work required to come back into compliance.
Several parts of this are fairly intuitive. Most agents, even new ones, will handle items 1-3 together as a unit and then skip to item 5. But, in my opinion, ascertaining the answer to item 4 provides the surest method of getting the customer to agree to fix the breach. So, we will talk about reputational damage next time.
Footnotes
1. Wikiquote contributors, Joseph Stalin, Wikiquote (2020), https://en.wikiquote.org/w/index.php?title=Joseph_Stalin&oldid=2747079#Misattributed (last visited Mar 16, 2020).
2. Mickey Chandler, Policy At Scale: Understanding The Issue, Spamtacular (Mar. 9, 2020), https://www.spamtacular.com/2020/03/09/policy-at-scale-understanding-the-issue/ (last visited Mar 16, 2020).
3. Mailchimp, About Omnivore, Mailchimp, https://mailchimp.com/help/about-omnivore/ (last visited Mar 11, 2020).
4. Laura Atkins, ESPs Are Failing Recipients, Word to the Wise (Jun. 4, 2019), https://wordtothewise.com/2019/06/esps-are-failing-recipients/ (last visited Mar 16, 2020).
5. Mickey Chandler, Enforcement Is Therapeutic, Spamtacular (Feb. 3, 2020), https://www.spamtacular.com/2020/02/03/enforcement-is-therapeutic/ (last visited Mar 16, 2020).