Markus v1.0 released! Better metrics API for Python projects.


What is it?

Markus is a Python library for generating metrics.

Markus makes it easier to generate metrics in your program by:

  • providing multiple backends (Datadog statsd, statsd, logging, logging roll-up, and so on) for sending data to different places
  • sending metrics to multiple backends at the same time
  • providing a testing framework for easy testing
  • providing a decoupled architecture so you can write code that generates metrics without worrying about whether the metrics client has been created and configured yet--similar to the Python logging module in this way

I use it at Mozilla in the collector of our crash ingestion pipeline. Peter used it to build our symbols lookup server, too.

v1.0 released!

This is the v1.0 release. I pushed out v0.2 back in April 2017. We've been using it in Antenna (the collector of the Firefox crash ingestion pipeline) since then. At this point, I think the API is sound and it's being used in production, ergo it's production-ready.

This release also adds Python 2.7 support.

Why you should take a look at Markus

Markus does three things that make generating metrics a lot easier.

First, it separates creating and configuring the metrics backends from generating metrics.

Let's create a metrics client that sends data nowhere:

import markus

markus.configure()

That's not wildly helpful, but it works and it's 2 lines.

Say we're doing development on a laptop on a speeding train and want to spit out metrics to the Python logging module so we can see what's being generated. We can do this:

import markus

markus.configure(
    backends=[
        {
            'class': 'markus.backends.logging.LoggingMetrics'
        }
    ]
)

That will spit out lines to Python logging. Now I can see metrics getting generated while I'm testing my code.

I'm ready to put my code in production, so let's add a statsd backend, too:

import markus

markus.configure(
    backends=[
        {
            # Log metrics to the logs
            'class': 'markus.backends.logging.LoggingMetrics',
        },
        {
            # Log metrics to statsd
            'class': 'markus.backends.statsd.StatsdMetrics',
            'options': {
                'statsd_host': 'statsd.example.com',
                'statsd_port': 8125,
                'statsd_prefix': '',
            }
        }
    ]
)

That's it. Tada!

Markus can support any number of backends. You can send data to multiple statsd servers. You can use the LoggingRollupBackend, which every flush_interval generates summary statistics for the metrics data it has seen: count, current, min, and max for incr stats; count, min, average, median, 95th percentile, and max for timing/histogram stats.

If Markus doesn't have the backends you need, writing your own metrics backend is straightforward.
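For illustration, here's roughly what a toy backend might look like. The incr/gauge/timing/histogram methods mirror the metrics interface, but treat the base class and exact signatures here as assumptions and check the backends documentation for the real API:

from markus.backends import BackendBase


class PrintMetrics(BackendBase):
    """Toy backend that prints every metric it sees."""

    def __init__(self, options):
        self.options = options

    def incr(self, stat, value=1, tags=None):
        print('INCR', stat, value, tags)

    def gauge(self, stat, value, tags=None):
        print('GAUGE', stat, value, tags)

    def timing(self, stat, value, tags=None):
        print('TIMING', stat, value, tags)

    def histogram(self, stat, value, tags=None):
        print('HISTOGRAM', stat, value, tags)

Then you point the 'class' key in markus.configure() at the dotted path of your class, the same as with the built-in backends.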

For more details, see the usage documentation and the backends documentation.

Second, writing code to generate metrics is straightforward and easy to do.

Much like the Python logging module, you add import markus at the top of the Python module and get a metrics interface. The interface can be module-level or in a class. It doesn't matter.

Here's a module-level metrics example:

import markus

metrics = markus.get_metrics(__name__)

Then you use it:

@metrics.timer_decorator('chopping_vegetables')
def some_long_function(vegetables):
    for veg in vegetables:
        chop_vegetable(veg)
        metrics.incr('vegetable', 1)

That's it. No bootstrapping problems, nice handling of metrics key prefixes, decorators, context managers, and so on. You can use multiple metrics interfaces in the same file. You can pass them around. You can reconfigure the metrics client and backends dynamically while your program is running.
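The same interface works from inside a class, and timers also work as context managers. Here's a quick sketch (the class and key names are made up for illustration; check the metrics overview documentation for the exact timer API):

import markus


class VegetableChopper:
    def __init__(self):
        # The metrics interface can live on an instance just as easily
        self.metrics = markus.get_metrics('kitchen.chopper')

    def chop_all(self, vegetables):
        # Timer as a context manager instead of a decorator
        with self.metrics.timer('chopping_vegetables'):
            for veg in vegetables:
                chop_vegetable(veg)
                self.metrics.incr('vegetable', 1)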

For more details, see the metrics overview documentation.

Third, testing metrics generation is easy to do.

Markus provides a MetricsMock to make testing easier:

import markus
from markus.testing import MetricsMock


def test_something():
    with MetricsMock() as mm:
        # ... Do things that might publish metrics

        # This helps you debug and write your test
        mm.print_records()

        # Make assertions on metrics published
        assert mm.has_metric(markus.INCR, 'some.key', {'value': 1})

I use it with pytest on my projects, but it is testing-system agnostic.
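For example, with pytest you might wrap MetricsMock in a fixture so every test gets a fresh one (a sketch; the fixture name is arbitrary):

import pytest

from markus.testing import MetricsMock


@pytest.fixture
def metricsmock():
    # Each test gets a fresh MetricsMock that captures published metrics
    with MetricsMock() as mm:
        yield mm


def test_something(metricsmock):
    # ... do things that might publish metrics

    metricsmock.print_records()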

For more details, see the testing documentation.

Why not use statsd directly?

You can definitely use statsd/dogstatsd libraries directly, but using Markus is a lot easier.

With Markus you don't have to worry about the order in which you create/configure the statsd client versus using the statsd client. You don't have to pass the statsd client around. It's a lot easier to use in Django and Flask, where bootstrapping the app and passing things around can be tricky.

With Markus you can fall back to sending metrics data to the Python logging library, which helps surface issues in development. I've had a few occasions when I thought I had written code to send data, but it turned out I hadn't or that I had messed up the keys or tags.

With Markus you get a testing mock which lets you write tests guaranteeing that your code is generating metrics the way you're expecting.

If you go with using the statsd/dogstatsd libraries directly, that's fine, but you'll probably want to write some/most of these things yourself.

Where to go for more

For more specifics on this release, see here: https://markus.readthedocs.io/en/latest/history.html#october-30th-2017

Documentation and quickstart here: https://markus.readthedocs.io/en/latest/index.html

Source code and issue tracker here: https://github.com/willkg/markus

Let me know whether this helps you!

rob-bugson 1.0: or how I wrote a webextension


I work on Socorro and other projects which use GitHub for version control and code review and use Mozilla's Bugzilla for bug tracking.

After creating a pull request in GitHub, I attach it to the related Bugzilla bug, which is a contra-dance of clicking and copy-and-paste. Github tweaks for Bugzilla simplified that by adding a link to the GitHub pull request page that I could click on, edit, and then submit the resulting form. However, it's a legacy addon, I use Firefox Nightly, and it doesn't look like anyone wrote a webextension version of it, so I was out of luck.

Today, I had to bring in my car for service and was sitting around at the dealership for a few hours. I figured instead of working on Socorro things, I'd take a break and implement an attach-pr-to-bug webextension.

I've never written a webextension before. I had written a couple of addons years ago using the SDK and then Jetpack (or something like that). My JavaScript is a bit rusty, especially ES6 stuff. I figured this would be a good way to learn about webextensions.

It took me about 4 hours of puzzling through docs, writing code, and debugging and then I had something that worked. Along the way, I discovered exciting things like:

  • host permissions let you run content scripts in web pages
  • content scripts can't access browser.tabs--you need a background script for that
  • you can pass messages from content scripts to background scripts
  • seems like everything returns a promise, but async/await make that a lot easier to work with
  • the attachment page on Bugzilla isn't like the create-bug page and ignores querystring params

The MDN docs for writing webextensions and the APIs involved are fantastic. The webextension samples are also great--I started with them when I was getting my bearings.

I created a new GitHub repository. I threw the code into a pull request making it easier for someone else to review it. Mike Cooper kindly skimmed it and provided insightful comments. I fixed the issues he brought up.

TheOne helped me resurrect my AMO account which I created in 2012 back when Gaia apps were the thing.

I read through Publishing your webextension, generated a .zip, and submitted a new addon.

About 10 minutes later, the addon had been reviewed and approved.

Now it's a thing and you can install rob-bugson.

Socorro signature generation overhaul and command line interface


Summary

This quarter I worked on creating a command line interface for signature generation and in doing that extracted it from the processor into a standalone-ish module.

The end result of this work is that:

  1. anyone making changes to signature generation can test the changes out on their local machine using a Socorro local development environment
  2. I can trivially test incoming signature generation changes--this both saves me time and gives me much higher confidence in correctness without having to merge the code and test it in our -stage environment [1]
  3. we can research and experiment with changes to the signature generation algorithm and how that affects existing crash signatures
  4. it's a step closer to being usable by other groups

This blog post talks about that work briefly and then talks about some of the things I've been able to do with it.

[1] I can't overstate how awesome this is.

Read more…

Socorro local development environment


Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if the user would like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

This (long-ish) blog post talks about how when I started on Socorro, there wasn't really a local development environment and how I went on a magical journey through dark forests and craggy mountains to find one.

If you do anything with Socorro at Mozilla, you definitely want to at least read the "Tell me more about this local development environment" part.

Read more…

Socorro and Firefox 57


Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if the user would like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro--specifically the Socorro collector.

Teams at Mozilla are feverishly working on Firefox 57. That's super important work and we're getting down to the wire. Socorro is a critical part of that development work as it collects incoming crashes, processes them, and has tools for analysis.

This blog post covers some of the things Socorro engineering has been doing to facilitate that work and what we're planning from now until Firefox 57 release.

This quarter

This quarter, we replaced Snappy with Tecken for more reliable symbol lookup in Visual Studio and other clients.

We built a Docker-based local dev environment for Socorro, making it easier to run Socorro on your local machine configured like crash-stats.mozilla.com. It now takes five steps to get Socorro running on your computer.

We also overhauled the signature generation system in Socorro and slapped on a command-line interface. Now you can test the effects of signature generation changes on specific crashes as well as groups of crashes on your local machine.

We've also been fixing stability issues and bugs and myriad other things.

Now until Firefox 57

Starting today and continuing until after Firefox 57 release, we are:

  1. prioritizing your signature generation changes, getting them landed, and pushing them to -prod
  2. triaging Socorro bugs into "need it right now" and "everything else" buckets
  3. deferring big changes to Socorro until after Firefox 57 including API endpoint deprecation, major UI changes to the crash-stats interface, and other things that would affect your workflow

We want to make sure crash analysis works as well as it can so you can do your best work and we can have a successful Firefox 57.

Please contact us if you need something!

We hang out on #breakpad on irc.mozilla.org. You can also write up bugs.

Hopefully this helps. If not, let us know!

Soloists: code review on a solo project


Summary

I work on some projects with other people, but I also spend a lot of time working on projects by myself. When I'm working by myself, I have difficulties with the following:

  1. code review
  2. bouncing ideas off of people
  3. peer programming
  4. long slogs
  5. getting help when I'm stuck
  6. publicizing my work
  7. dealing with loneliness
  8. going on vacation

I started a #soloists group at Mozilla figuring there are a bunch of other Mozillians who are working on solo projects and maybe if we all work alone together, then that might alleviate some of the problems of working solo. We hang out in the #soloists IRC channel on irc.mozilla.org. If you're solo, join us!

I keep thinking about writing a set of blog posts for things we've talked about in the channel and how I do things. Maybe they'll help you.

This one covers code review.

Read more…

Antenna: post-mortem and project wrap-up


Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if the user would like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro--specifically the Socorro collector.

The Socorro collector is one of several components that comprise Socorro. Each of the components has different uptime requirements and different security risk profiles. However, all the code is maintained in a single repository and we deploy everything every time we do a deploy. This is increasingly inflexible and makes it difficult for us to make architectural changes to Socorro without affecting everything and incurring uptime risk for components that have high uptime requirements.

Because of that, in early 2016, we embarked on a rearchitecture to split out some components of Socorro into separate services. The first component to get split out was the Socorro collector since it has the highest uptime requirements of all the Socorro components but rarely changes, so it'd be a lot easier to meet those requirements if it was separate from the rest of Socorro.

Thus I was tasked with splitting out the Socorro collector, and this blog post covers that project. It's a bit stream-of-consciousness, because I think there's some merit to explaining the thought process behind how I did the work over the course of the project, in case it helps other people working on similar projects.

Read more…

The Soloists


Building Firefox is a big endeavor. There are many teams and projects covering initiatives, maintenance, bug fixing, triage, localization, support, understanding feedback, marketing, communication, releasing, supporting infrastructure, crash analysis, and a bazillion other activities all to build a family of browsers and applications.

Teams and projects aren't static. People move around as priorities change and the landscape shifts and projects complete or are scuttled.

Sometimes projects get started up with a single person. Sometimes all the people except one move off a project. Sometimes we find ourselves working alone, in a basement office, with only a stapler equivalent to keep us company.

We are the soloists. You wouldn't believe the list of things we work on. Alone.

Where to find soloists: IRC, Slack

There's an IRC channel #soloists on irc.mozilla.org.

There's also a Slack channel #soloists on the Mozilla Slack [1].

These two channels (and whatever other places soloists want to hang out) are places where we can:

  • find some solace from the weary drudgery of being alone on our projects for days on end
  • ask for help
  • bounce ideas off each other
  • vent frustrations in a friendly forgiving place
  • get advice on dealing with things like code reviews and how to go on vacation
  • get recognition for a job well done

and a variety of other things that alleviate many of the problems we have as soloists.

[1] I just created it, so it's kind of empty. I'm feeling alone in the #soloists Slack channel. So alone.

Stickers at the All Hands!

Over the last month or so, we spent some time figuring out #soloists stickers because we like stickers and you like stickers and everyone likes stickers.

They look like this:

[Image: Soloist 2017 sticker]

They're 2" by 2" and round. They're warm to the touch. They make you want to climb things. By yourself. Alone. With appropriate safety gear. [2]

If you're a soloist, come find one of us and get a sticker. Also, consider joining soloist channels.

If you support soloists, come find one of us and get a sticker. Ask us about the things we're working on. We may be solo, but we're working on real projects that almost certainly affect you. As a group, we did great things in the last 6 months. Alone. So alone.

[2] That's how they make me feel, anyhow.

Using Localstack for a fake AWS S3 for local development


Summary

Over the last year, I rewrote the Socorro collector, which is the edge of the Mozilla crash ingestion pipeline. Antenna (the new collector) receives crashes from Breakpad clients as HTTP POSTs, converts the POST payload into JSON, and then saves a bunch of stuff to AWS S3. One of the problems with this is that it's a pain in the ass to do development on my laptop when I'm not connected to the Internet and can't reach AWS S3.

This post covers the various things I did to have a locally running fake AWS S3 service.

Read more…

Everett v0.9 released and why you should use Everett


What is it?

Everett is a Python configuration library.

Configuration with Everett:

  • is composable and flexible
  • makes it easier to provide helpful error messages for users trying to configure your software
  • supports auto-documentation of configuration with a Sphinx autocomponent directive
  • supports easy testing with configuration override
  • can pull configuration from a variety of specified sources (environment, ini files, dict, write-your-own)
  • supports parsing values (bool, int, lists of things, classes, write-your-own)
  • supports key namespaces
  • supports component architectures
  • works with whatever you're writing--command line tools, web sites, system daemons, etc

Everett is inspired by python-decouple and configman.
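Here's a minimal sketch of pulling configuration from the process environment (the names here are from my reading of the Everett docs, so double-check them against the quickstart):

from everett.manager import ConfigManager, ConfigOSEnv, parse_bool

# Pull configuration values from environment variables
config = ConfigManager([ConfigOSEnv()])

# DEBUG defaults to "false" and gets parsed into a Python bool
debug = config('debug', default='false', parser=parse_bool)

# STATSD_PORT must be parseable as an int
statsd_port = config('statsd_port', default='8125', parser=int)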

v0.9 released!

This release focused on overhauling the Sphinx extension. It now:

  • has an Everett domain
  • supports roles
  • indexes Everett components and options
  • looks a lot better

This was the last big thing I wanted to do before doing a 1.0 release. I consider Everett 0.9 to be a solid beta. Next release will be a 1.0.

Why you should take a look at Everett

At Mozilla, I'm using Everett 0.9 for Antenna which is running in our -stage environment and will go to -prod very soon. Antenna is the edge of the crash ingestion pipeline for Mozilla Firefox.

When writing Antenna, I started out with python-decouple, but I didn't like the way python-decouple dealt with configuration errors (it's pretty hands-off) and I really wanted to automatically generate documentation from my configuration code. Why write the same stuff twice, especially when it's a critical part of setting Antenna up and the part everyone will trip over first?
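To make that concrete, here's roughly what a component with documented options looks like. This is from memory of the 0.9 component API, so treat the class and method names as assumptions and see the Everett docs for the real thing:

from everett.component import ConfigOptions, RequiredConfigMixin


class StatsdClient(RequiredConfigMixin):
    required_config = ConfigOptions()
    required_config.add_option(
        'statsd_port',
        default='8125',
        parser=int,
        doc='Port for the statsd server'
    )

    def __init__(self, config):
        # Bind this component's option definitions to the config manager
        self.config = config.with_options(self)
        self.port = self.config('statsd_port')

The doc string on each option is what shows up in the generated Sphinx documentation and in configuration error messages.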

Here's the configuration documentation for Antenna:

http://antenna.readthedocs.io/en/latest/configuration.html#application

Here's the index which makes it easy to find things by component or by option (in this case, environment variables):

http://antenna.readthedocs.io/en/latest/genindex.html

When you configure Antenna incorrectly, it spits out an error message like this:

1  <traceback omitted, but it'd be here>
2  everett.InvalidValueError: ValueError: invalid literal for int() with base 10: 'foo'
3  namespace=None key=statsd_port requires a value parseable by int
4  Port for the statsd server
5  For configuration help, see https://antenna.readthedocs.io/en/latest/configuration.html

So what's here?

  • Block 1 is the traceback so you can trace the code if you need to.
  • Line 2 is the exception type and message.
  • Line 3 tells you the namespace, key, and parser used.
  • Line 4 is the documentation for that specific configuration option.
  • Line 5 is the "see also" documentation for the component with that configuration option.

Is it beautiful? No. [1] But it gives you enough information to know what the problem is and where to go for more information.

Further, in Python 3, Everett will always raise a subclass of ConfigurationError so if you don't like the output, you can tailor it to your project's needs. [2]
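For example, you could catch it at startup and print something friendlier. A sketch, assuming ConfigurationError is importable from the top-level everett package the way the error output above suggests:

import sys

from everett import ConfigurationError
from everett.manager import ConfigManager, ConfigOSEnv


def main():
    config = ConfigManager([ConfigOSEnv()])
    try:
        # Raises a ConfigurationError subclass if STATSD_PORT is missing
        # or isn't parseable as an int
        statsd_port = config('statsd_port', parser=int)
    except ConfigurationError as exc:
        print('Configuration error: %s' % exc, file=sys.stderr)
        return 1
    # ... start the app with statsd_port ...
    return 0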

First-class docs. First-class configuration error help. First-class testing. This is why I created Everett.

If this sounds useful to you, take it for a spin. It's almost a drop-in replacement for python-decouple [3] and the os.environ.get('CONFIGVAR', 'default_value') style of configuration.

Enjoy!

[1] I would love some help here--making that information easier to parse would be great for a 1.0 release.
[2] Python 2 doesn't support exception chaining and I didn't want to stomp on the original exception thrown, so in Python 2, Everett doesn't wrap exceptions.
[3] python-decouple is a great project and does a good job at what it was built to do. I don't mean to demean it in any way. I have additional requirements that python-decouple doesn't do well and that's where I'm coming from.

Where to go for more

For more specifics on this release, see here: http://everett.readthedocs.io/en/latest/history.html#april-7th-2017

Documentation and quickstart here: https://everett.readthedocs.org/en/v0.9/

Source code and issue tracker here: https://github.com/willkg/everett