Selecting a package to use in your project is a fraught process. Perhaps you’re lucky and there are several options from which to choose. How do you make that decision?
Picking a package that isn’t battle-tested may expose you to serious bugs that haven’t been encountered yet, while picking a package that hasn’t seen activity in months can mean wasted time and a codebase that falls further behind as updates arrive.
If you’re inclined to trust packages that are well used by production apps under real load, you’re in the majority. Based on developer research and polling our own team, we’ve identified a handful of indicators that help developers decide which package to integrate into their app.
But why check each of those reliability metrics yourself every time you evaluate a package? Atmosphere simplifies the decision by synthesizing these indicators into a single number: the Atmosphere Score.
One score to rule them all
There are many signals that developers rely on to get a feel for package quality, some quantifiable, others less so. The Atmosphere package manager attempts to gather and surface the quantifiable ones without making things confusing.
However, when it comes to ordering packages in places like the search results, we’re only able to leverage a single dimension. So we need to combine these metrics into a synthetic score to determine sort order. Enter the Atmosphere Score.
The metrics we use
As of this writing we use two metrics to score a package:
- Number of downloads, both historical and recent.
- Number of days since the last release.
We plan on expanding our coverage to encompass signals like:
- GitHub activity
  - Recent commit activity.
  - Number of open/closed bugs and pull requests.
  - Number of stars.
- Real app usage
  - How many downloads come from unique apps.
  - How much traffic those apps receive.
- Atmosphere activity
- Social activity
  - Twitter mentions.
How we combine them (the not-so-secret sauce)
We calculate the Atmosphere Score once a day. Although a package’s download count can tick up many times during the day, the score itself rarely changes much within that window. (This doesn’t hold true for recently updated packages, which is something we’ll revisit in a later Atmosphere revision.)
The nascent Atmosphere Score formula is simple. It applies a power-law decay over time to a base popularity, determined by total download counts, with a small bias towards recent downloads. It’s essentially the same family of algorithm used by sites such as Reddit, Hacker News, and Telescope.
score = (installsInWeek + installsInMonth/2) * (daysSinceUpdate + 30) ^ -1.3
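The formula above can be sketched in a few lines of JavaScript. The function name and parameter names here are illustrative, not Atmosphere’s actual code:

```javascript
// Sketch of the Atmosphere Score formula. Recent (weekly) installs count
// double relative to monthly installs, and the power-law term penalizes
// packages that haven't been updated recently.
function atmosphereScore(installsInWeek, installsInMonth, daysSinceUpdate) {
  var popularity = installsInWeek + installsInMonth / 2;
  var decay = Math.pow(daysSinceUpdate + 30, -1.3);
  return popularity * decay;
}

// A package with 700 weekly / 2000 monthly installs, updated 5 days ago:
// atmosphereScore(700, 2000, 5) ≈ 1700 * 35^-1.3 ≈ 16.7
```

The `+ 30` term keeps brand-new releases from dominating the rankings: a package updated today is only modestly favored over one updated a week ago, but both beat a package untouched for a year.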
How we store the scores
The Atmosphere Score is the basic unit for ranking packages, so we store the latest score (calculated daily) on the package document itself. Since we also want to rank trending packages (based on the 7-day change in score), we store that delta on the package as well.
To calculate that delta, and to display the 7-day trend histogram on the package, we also store every package’s score for each day in a separate collection.
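The daily job described above might look something like the following sketch. In Atmosphere these would be MongoDB collections; here two in-memory structures stand in, and all names are illustrative rather than the actual schema:

```javascript
// Hypothetical sketch of the daily scoring job, using in-memory stand-ins
// for what would be MongoDB collections in a real Meteor app.
var packages = {};      // latest score + 7-day delta, keyed by package name
var scoreHistory = [];  // one { name, score, day } record per package per day

function recordDailyScore(name, score, day) {
  // Keep the full per-day history, used to draw the 7-day trend histogram.
  scoreHistory.push({ name: name, score: score, day: day });

  // Look up the most recent score from at least 7 days ago for the delta.
  var old = scoreHistory
    .filter(function (r) { return r.name === name && r.day <= day - 7; })
    .pop();

  // Denormalize the latest score and its delta onto the package record,
  // so both "top" and "trending" rankings are simple single-field sorts.
  packages[name] = {
    score: score,
    scoreDelta: old ? score - old.score : 0
  };
}
```

Denormalizing the score and delta onto the package document is what makes sorting cheap: both rankings become an indexed sort on a single field rather than an aggregation over the history.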
Serving the scores and drawing the histogram
If you inspect the HTML of the sparkline on the package lockup, you’ll see a simple SVG that draws the graph. A library like D3 can be useful for such things.
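For something as small as a sparkline, the SVG path can even be generated by hand. The following is a minimal sketch (dimensions and normalization are illustrative); a library like D3 gives you scales and line generators that do the same work more flexibly:

```javascript
// Turn a week of scores into an SVG path string for a sparkline.
// Illustrative sketch only -- not Atmosphere's actual rendering code.
function sparklinePath(scores, width, height) {
  var max = Math.max.apply(null, scores);
  var min = Math.min.apply(null, scores);
  var range = max - min || 1; // avoid dividing by zero on a flat week
  return scores.map(function (s, i) {
    var x = (i / (scores.length - 1)) * width;
    var y = height - ((s - min) / range) * height; // SVG y-axis points down
    return (i === 0 ? "M" : "L") + x.toFixed(1) + "," + y.toFixed(1);
  }).join("");
}

// Usage:
// '<svg><path fill="none" stroke="currentColor" d="' +
//   sparklinePath([3, 4, 6, 5, 8, 9, 12], 60, 14) + '"/></svg>'
```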
To fetch the seven scores required for each package we list on the page, we use a “client-side join” publication (as detailed here). You’ll notice that when the Atmosphere homepage first loads, there’s a moment before the graphs appear. Since the lockup works without them, this is fine and suits this experience well.
As Meteor gains popularity and the volume of packages grows, the Atmosphere Score will be invaluable in helping developers find the best package for their needs. As we add more metrics, we expect the Atmosphere Score to become an even better measure of package trust and quality.
As always, let us know your suggestions in the comments or the issue tracker. Onwards.