
Welcome to Ghost

Welcome, it's great to have you here. We know that first impressions are important, so we've populated your new site with some initial getting started posts that will help you get familiar with everything in no time.

A few things you should know

  1. Ghost is designed for ambitious, professional publishers who want to actively build a business around their content. That's who it works best for.
  2. The entire platform can be modified and customised to suit your needs. It's very powerful, but does require some knowledge of code. Ghost is not necessarily a good platform for beginners or people who just want a simple personal blog.
  3. You can work with all your favourite tools and apps, with hundreds of integrations to speed up your workflows, connect email lists, build communities and much more.

Behind the scenes

Ghost is made by an independent non-profit organisation called the Ghost Foundation. We are 100% self-funded by revenue from our Ghost(Pro) service, and every penny we make is re-invested into funding further development of free, open source technology for modern publishing.

The version of Ghost you are looking at right now would not have been made possible without generous contributions from the open source community.

Next up, the editor

The main thing you'll want to read about next is probably the Ghost editor. This is where the good stuff happens.

By the way, once you're done reading, you can simply delete the default Ghost user from your team to remove all of these introductory posts!

We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore’s Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.
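As a quick sanity check on these headline numbers, the short Python sketch below (illustrative only; it uses just the 44x factor and the 7-year window quoted above) recovers the roughly 16-month doubling time, and shows that a 2-year Moore’s Law doubling over the same period yields about 11x:

import math

# Headline figures taken from the text above
total_factor = 44   # 44x less compute than AlexNet needed in 2012
months = 7 * 12     # 7 years between AlexNet (2012) and 2019

# Implied doubling time: total_factor = 2 ** (months / doubling_time)
doubling_time = months / math.log2(total_factor)
print(f"efficiency doubling time: ~{doubling_time:.1f} months")  # ~15.4

# Moore's Law comparison: hardware efficiency doubling every ~24 months
moores_factor = 2 ** (months / 24)
print(f"Moore's Law gain over the same period: ~{moores_factor:.0f}x")  # ~11x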


Algorithmic improvement is a key factor driving the advance of AI. It’s important to search for measures that shed light on overall algorithmic progress, even though it’s harder than measuring such trends in compute.

[Chart: 44x less compute required to get to AlexNet performance 7 years later. Total amount of compute, in teraflop/s-days, used to train to AlexNet-level performance, 2012–2019; models include AlexNet, VGG-11, GoogLeNet, ResNet-18/34/50, SqueezeNet v1.1, DenseNet-121, MobileNet v1/v2, ShuffleNet v1/v2 (1x and 1.5x), ResNeXt-50, Wide ResNet-50 and EfficientNet-b0. Lowest compute points at any given time shown in blue; all points measured shown in gray.]


Measuring efficiency

Algorithmic efficiency can be defined as reducing the compute needed to train a specific capability. Efficiency is the primary way we measure algorithmic progress on classic computer science problems like sorting, where gains are more straightforward to measure than in ML because such problems have a clearer measure of task difficulty. [1]

However, we can apply the efficiency lens to machine learning by holding performance constant. Efficiency trends can be compared across domains like DNA sequencing (10-month doubling), solar energy (6-year doubling), and transistor density (2-year doubling).
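To put these rates on a common footing, here is a small illustrative sketch that converts each doubling time quoted above into an implied improvement factor over a fixed window (the ten-year window is an arbitrary choice for illustration):

# Doubling times cited above; the 10-year window is illustrative
doubling_times_months = {
    "AlexNet-level training compute": 16,
    "DNA sequencing": 10,
    "solar energy": 6 * 12,
    "transistor density (Moore's Law)": 24,
}

WINDOW_MONTHS = 10 * 12

for domain, dt in doubling_times_months.items():
    # exponential growth: factor = 2 ** (elapsed time / doubling time)
    factor = 2 ** (WINDOW_MONTHS / dt)
    print(f"{domain}: ~{factor:,.0f}x per decade")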

We are standardizing OpenAI’s deep learning framework on PyTorch. In the past, we implemented projects in many frameworks depending on their relative strengths. We’ve now chosen to standardize to make it easier for our team to create and share optimized implementations of our models.
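For flavor, here is a minimal, self-contained PyTorch training step; this is a generic sketch of the framework’s core API (model, loss, autograd, optimizer), not OpenAI’s internal code:

import torch
import torch.nn as nn

# A generic sketch of one training step, not OpenAI's internal code
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(128, 32)  # dummy inputs
y = torch.randn(128, 1)   # dummy targets

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()   # autograd computes gradients for every parameter
optimizer.step()  # the optimizer applies the update
print(f"loss: {loss.item():.4f}")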

$ pip install procgen # install
$ python -m procgen.interactive --env_name starpilot # human
$ python <<EOF # random AI agent
import gym

# installing procgen registers its environments with gym
env = gym.make('procgen:procgen-coinrun-v0')
obs = env.reset()
while True:
    # sample a random action, step the environment, and render each frame
    obs, rew, done, info = env.step(env.action_space.sample())
    env.render()
    if done:
        break
EOF

Design principles

We’ve designed all Procgen environments to satisfy the following criteria:

