
Creating a custom theme


Ghost themes

Ghost comes with a default theme called Casper, which is designed to be a clean, readable publication layout and can be easily adapted for most purposes.

If you need something a little more customised, it's entirely possible to build on top of existing open source themes, or to build your own from scratch. Rather than giving you a few basic settings which act as a poor proxy for code, we just let you write code.

Marketplace

There's a huge range of free and premium pre-built themes which you can download from the Ghost Theme Marketplace:

[Image: the Ghost Theme Marketplace]
Anyone can write a completely custom Ghost theme with some solid knowledge of HTML and CSS

Theme development

Ghost themes are written with a templating language called Handlebars, which provides a set of dynamic helpers for inserting your data into template files. For example: {{author.name}} outputs the name of the current author.

The best way to learn how to write your own Ghost theme is to have a look at the source code for Casper, which is heavily commented and should give you a sense of how everything fits together.

  • default.hbs is the main template file; all contexts load inside this file unless specifically told to use a different template.
  • post.hbs is the template used in the context of viewing a post (a minimal sketch of this file follows the list).
  • index.hbs is the template used in the context of viewing the home page.
  • and so on
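To make this concrete, here's a minimal sketch of what a post.hbs might look like. It's illustrative rather than copied from Casper: the {{!< default}} layout declaration, the {{#post}} block and the {{title}}, {{author.name}} and {{content}} helpers are standard Ghost theme constructs, while the markup and class names here are made up.

{{!< default}}
{{!-- This template renders where default.hbs outputs {{{body}}} --}}
{{#post}}
<article class="post">
    <h1>{{title}}</h1>
    <p>By {{author.name}}</p>
    {{content}}
</article>
{{/post}}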

We've got extensive theme documentation which outlines every template file, context and helper that you can use. You can also get started with our starter theme, which includes the most common foundations and components required to build your own theme.

If you want to chat with other people making Ghost themes for advice or help, there's also a themes section on our public Ghost forum.

We're releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet[1] classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet[2] (by contrast, Moore's Law[3] would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.
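As a rough sanity check on these figures (a sketch, not taken from the paper), the implied doubling time and the Moore's Law comparison can be computed directly:

import math

# Quoted figures: 44x less compute needed in 2019 than in 2012 (7 years).
factor, years = 44, 7

# Implied doubling time of algorithmic efficiency, in months.
print(years * 12 / math.log2(factor))  # ~15.4, i.e. roughly every 16 months

# Moore's Law (doubling every ~2 years) over the same 7-year window.
print(2 ** (years / 2))                # ~11.3, the quoted ~11x improvement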


Algorithmic improvement is a key factor driving the advance of AI. It's important to search for measures that shed light on overall algorithmic progress, even though it's harder than measuring such trends in compute.[4]

[Chart: 44x less compute required to get to AlexNet performance 7 years later. Total amount of compute, in teraflop/s-days, used to train to AlexNet-level performance, 2012–2019. Lowest compute points at any given time shown in blue, all points measured shown in gray. Models shown include AlexNet, VGG-11, GoogLeNet, ResNet-18/34/50, SqueezeNet v1.1, DenseNet-121, MobileNet v1/v2, ShuffleNet v1/v2, EfficientNet-B0, ResNeXt-50 and Wide ResNet-50.]


Measuring efficiency

Algorithmic efficiency can be defined as reducing the compute needed to train a specific capability. Efficiency is the primary way we measure algorithmic progress on classic computer science problems like sorting, where gains are more straightforward to measure than in ML because task difficulty is clearer to define.[1]

However, we can apply the efficiency lens to machine learning by holding performance constant. Efficiency trends can be compared across domains like DNA sequencing[17] (10-month doubling), solar energy[18] (6-year doubling), and transistor density[3] (2-year doubling).
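To put those doubling times on a common scale (an illustrative sketch using only the figures quoted above), each doubling time T in years can be converted to an annual improvement factor of 2^(1/T):

# Annual improvement factor implied by each quoted doubling time.
doubling_time_years = {
    "AI training efficiency": 16 / 12,  # ~16-month doubling
    "DNA sequencing": 10 / 12,          # 10-month doubling
    "solar energy": 6.0,                # 6-year doubling
    "transistor density": 2.0,          # 2-year doubling
}

for domain, t in doubling_time_years.items():
    print(f"{domain}: ~{2 ** (1 / t):.2f}x per year")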

We are standardizing OpenAI’s deep learning framework on PyTorch. In the past, we implemented projects in many frameworks depending on their relative strengths. We’ve now chosen to standardize to make it easier for our team to create and share optimized implementations of our models.

$ pip install procgen # install
$ python -m procgen.interactive --env-name starpilot # human
$ python <<EOF # random AI agent
import gym

# Procgen environments are registered with Gym under the "procgen:" prefix.
env = gym.make('procgen:procgen-coinrun-v0')
obs = env.reset()
while True:
    # Take a random action, then render the resulting frame.
    obs, rew, done, info = env.step(env.action_space.sample())
    env.render()
    if done:
        break
EOF

Design principles

We’ve designed all Procgen environments to satisfy the following criteria:

