This is a fast-paced walk-through of the internals of defining new classes in Python. It shows what actually happens inside the Python interpreter when a new class definition is encountered and processed. Beware, this is advanced material. If the prospect of pondering the metaclass of the metaclass of your class makes you feel nauseated, you'd better stop now.

The focus is on the official (CPython) implementation of Python 3. For modern releases of Python 2 the concepts are similar, although there will be some slight differences in the details.

On the bytecode level

I'll start right with the bytecode, ignoring all the good work done by the Python compiler [1]. For simplicity, this function will be used to demonstrate the bytecode generated by a class definition, since it's easy to disassemble functions:

def myfunc():
    class Joe:
        attr = 100.02
        def foo(self):
            return 2

Disassembling myfunc will show us the steps needed to define a new class:

>>> dis.disassemble(myfunc.__code__)
 14           0 LOAD_BUILD_CLASS
              1 LOAD_CONST               1 (<code object Joe at 0x7fe226335b80, file "disassemble.py", line 14>)
              4 LOAD_CONST               2 ('Joe')
              7 MAKE_FUNCTION            0
             10 LOAD_CONST               2 ('Joe')
             13 CALL_FUNCTION            2
             16 STORE_FAST               0 (Joe)
             19 LOAD_CONST               0 (None)
             22 RETURN_VALUE

The number immediately preceding the instruction name is its offset in the binary representation of the code object. All the instructions up to and including the one at offset 16 are for defining the class. The last two instructions are for myfunc to return None.

Let's go through them, step by step. Documentation of the Python bytecode instructions is available in the dis module.

LOAD_BUILD_CLASS is a special instruction used for creating classes. It pushes the function builtins.__build_class__ onto the stack. We'll examine this function in much detail later.
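Because LOAD_BUILD_CLASS looks up __build_class__ in builtins at runtime, we can already observe class creation in action by temporarily wrapping that function. This is a throwaway sketch for exploration (the names tracing_build_class and seen are made up for illustration), not something to do in production code:

```python
import builtins

# Swap out builtins.__build_class__ to watch it fire for every class
# statement executed while the wrapper is installed.
_orig_build_class = builtins.__build_class__
seen = []

def tracing_build_class(func, name, *bases, **kwds):
    seen.append(name)                      # record each class being built
    return _orig_build_class(func, name, *bases, **kwds)

builtins.__build_class__ = tracing_build_class
try:
    class Demo:
        attr = 100.02
finally:
    builtins.__build_class__ = _orig_build_class

print(seen)   # ['Demo']
```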

Next, a code object, followed by the name Joe, is pushed onto the stack as well. The code object is interesting, let's peek inside:

>>> dis.disassemble(myfunc.__code__.co_consts[1])
 14           0 LOAD_FAST                0 (__locals__)
              3 STORE_LOCALS
              4 LOAD_NAME                0 (__name__)
              7 STORE_NAME               1 (__module__)
             10 LOAD_CONST               0 ('myfunc.<locals>.Joe')
             13 STORE_NAME               2 (__qualname__)

 15          16 LOAD_CONST               1 (100.02)
             19 STORE_NAME               3 (attr)

 16          22 LOAD_CONST               2 (<code object foo at 0x7fe226335c40, file "disassemble.py", line 16>)
             25 LOAD_CONST               3 ('myfunc.<locals>.Joe.foo')
             28 MAKE_FUNCTION            0
             31 STORE_NAME               4 (foo)
             34 LOAD_CONST               4 (None)
             37 RETURN_VALUE

This code defines the innards of the class. Some generic bookkeeping, followed by definitions for the attr attribute and foo method.
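We don't have to rely on dis alone; the class body's code object can be fished out of myfunc's constants and inspected directly. A small sketch (using next() to stay independent of the exact co_consts layout, which varies between Python versions):

```python
import types

def myfunc():
    class Joe:
        attr = 100.02
        def foo(self):
            return 2

# The class body is compiled into a code object stored among the
# constants of the enclosing function's code object.
joe_code = next(c for c in myfunc.__code__.co_consts
                if isinstance(c, types.CodeType))
print(joe_code.co_name)              # 'Joe'
print('attr' in joe_code.co_names)   # True: attr is stored with STORE_NAME
```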

Now let's get back to the first disassembly. The next instruction (at offset 7) is MAKE_FUNCTION [2]. This instruction pulls two things from the stack - a name and a code object. So in our case, it gets the name Joe and the code object we saw disassembled above. It creates a function with the given name and the code object as its code and pushes it back to the stack.

This is followed by once again pushing the name Joe onto the stack. Here's what the stack looks like now (TOS means "top of stack"):

TOS> name "Joe"
     function "Joe" with code for defining the class
     function builtins.__build_class__
     -----------------------------------------------

At this point (offset 13), CALL_FUNCTION 2 is executed. The 2 simply means that the function was passed two positional arguments (and no keyword arguments). CALL_FUNCTION first takes the arguments from the stack (the rightmost on top), and then the function itself. So the call is equivalent to:

builtins.__build_class__(function defining "Joe", "Joe")

Build me a class, please

A quick peek into the builtins module in Python/bltinmodule.c reveals that __build_class__ is implemented by the function builtin___build_class__ (I'll call it BBC for simplicity) in the same file.

Like any Python function, BBC accepts both positional and keyword arguments. The positional arguments are:

func, name, base1, base2, ... baseN

So we see only the function and name were passed for Joe, since it has no base classes. The only keyword argument BBC understands is metaclass [3], allowing the Python 3 way of defining metaclasses:

class SomeOtherJoe(metaclass=JoeMeta):
  [...]

So back to BBC, here's what it does [4]:

  1. The first chunk of code deals with extracting the arguments and setting defaults.
  2. Next, if no metaclass is supplied, BBC looks at the base classes and takes the metaclass of the first base class. If there are no base classes, the default metaclass type is used.
  3. If the metaclass is really a class (note that in Python any callable can be given as a metaclass), look at the bases again to determine "the most derived" metaclass.

The last point deserves a bit of elaboration. If our class has bases, then some rules apply for the metaclasses that are allowed. The metaclasses of its bases must be either subclasses or superclasses of our class's metaclass. Any other arrangement will result in this TypeError:

metaclass conflict: the metaclass of a derived class must be a (non-strict)
                    subclass of the metaclasses of all its bases

Eventually, given that there are no conflicts, the most derived metaclass will be chosen. The most derived metaclass is the one which is a subtype of the explicitly specified metaclass and the metaclasses of all the base classes. In other words, if our class's metaclass is Meta1, only one of the bases has a metaclass and that's Meta2, and Meta2 is a subclass of Meta1, it is Meta2 that will be picked to serve as the eventual metaclass of our class.
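These rules can be demonstrated with a few toy metaclasses (all the names here are illustrative):

```python
class Meta1(type): pass
class Meta2(Meta1): pass            # Meta2 is a subclass of Meta1
class Base(metaclass=Meta2): pass   # a base whose metaclass is Meta2

# Meta1 is requested explicitly, but Meta2 is more derived, so it wins:
class C(Base, metaclass=Meta1): pass
print(type(C) is Meta2)             # True

# A metaclass unrelated to Meta2 triggers the conflict:
class Unrelated(type): pass
conflict = False
try:
    class D(Base, metaclass=Unrelated): pass
except TypeError:
    conflict = True
print(conflict)                     # True
```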

  4. At this point BBC has a metaclass [5], so it starts by calling its __prepare__ method to create a namespace dictionary for the class. If there's no such method, an empty dict is used.

As documented in the data model reference:

If the metaclass has a __prepare__() attribute (usually implemented as a class or static method), it is called before the class body is evaluated with the name of the class and a tuple of its bases for arguments. It should return an object that supports the mapping interface that will be used to store the namespace of the class. The default is a plain dictionary. This could be used, for example, to keep track of the order that class attributes are declared in by returning an ordered dictionary.
  5. The function argument is invoked, passing the namespace dict as the only argument. If we look back at the disassembly of this function (the second one), we see that the first argument is placed into the f_locals attribute of the frame (with the STORE_LOCALS instruction). In other words, this dictionary is then used to populate the class attributes. The function itself returns None - its outcome is modifying the namespace dictionary.
  6. Finally, the metaclass is called with the name, list of bases and namespace dictionary as arguments.
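The __prepare__ mechanism just described can be sketched with a toy metaclass (OrderMeta is a made-up name) that hands the class body an OrderedDict and then records the order in which members were defined:

```python
from collections import OrderedDict

class OrderMeta(type):
    @classmethod
    def __prepare__(mcs, name, bases, **kwds):
        return OrderedDict()       # the mapping the class body will populate
    def __new__(mcs, name, bases, ns, **kwds):
        cls = super().__new__(mcs, name, bases, dict(ns))
        # Keep the declaration order, skipping dunder bookkeeping entries.
        cls.member_order = [k for k in ns if not k.startswith('__')]
        return cls

class Shape(metaclass=OrderMeta):
    def area(self): pass
    def perimeter(self): pass

print(Shape.member_order)   # ['area', 'perimeter']
```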

The last step defers to the metaclass to actually create a new class with the given definition. Recall that when some class MyKlass has a metaclass MyMeta, then the class definition of MyKlass is equivalent to [6]:

MyKlass = MyMeta(name, bases, namespace_dict)

The flow of BBC outlined above directly embodies this equivalence.
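A small sketch of that equivalence (illustrative names): MyMeta below mutates the namespace dict before delegating to type, and both spellings of the class definition go through it:

```python
class MyMeta(type):
    def __new__(mcs, name, bases, ns):
        ns['tagged'] = True        # the metaclass can edit the namespace
        return super().__new__(mcs, name, bases, ns)

# The class statement...
class MyKlass(metaclass=MyMeta):
    attr = 42

# ...and the direct metaclass call are served by the same code path.
MyKlass2 = MyMeta('MyKlass2', (), {'attr': 42})

print(MyKlass.tagged, MyKlass2.tagged)   # True True
```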

So what happens next? Well, the metaclass MyMeta is a class, right? And what happens when a class is "called"? It's instantiated. How is a class's instantiation done? By invoking its metaclass's __call__. So wait, this is the metaclass's metaclass we're talking about here, right? Yes! A metaclass is just a class, after all [7], and has a metaclass of its own - so Python has to keep the meta-flow going.

Realistically, what probably happens is this:

Chances are that your class has no explicitly specified metaclass. In that case, its default metaclass is type, so the call above is actually:

MyKlass = type(name, bases, namespace_dict)

The metaclass of type happens to be type itself, so here type.__call__ is called.
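This self-referential arrangement is easy to observe:

```python
# type closes the metaclass chain: it is an instance of itself.
assert type(type) is type

# An ordinary class with no explicit metaclass is an instance of type:
class Plain:
    pass

assert type(Plain) is type
print('ok')
```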

In the more complex case where your class does have a metaclass, chances are that the metaclass itself has no metaclass of its own [8], so type is used for it. Therefore, the MyMeta(...) call is also served by type.__call__.

type_call

In Objects/typeobject.c, the type.__call__ slot is mapped to the function type_call. I've already spent some time explaining how it works in the article on object creation, so this is a good point to review that article.

Things are a bit different here, however. The object creation sequence article explained how instances are created, so the tp_new slot called from type_call went to object. Here, since type_call will actually call tp_new on a metaclass, and the metaclass's base is type (see this diagram), we'll have to study how the type_new function (also from Objects/typeobject.c) works.

A brief recap

I feel that the flow here is relatively convoluted, so lest we lose focus, let's briefly recap how we got this far. The following is a much simplified version of the flow described so far in this article:

  1. When a new class Joe is defined...
  2. The Python interpreter arranges for the builtin function builtin___build_class__ (BBC) to be called, giving it the class name and its innards compiled into a code object.
  3. BBC finds the metaclass of Joe and calls it to create the new class.
  4. When any class in Python is called, it means that its metaclass's tp_call slot is invoked. So to create Joe, this is the tp_call of its metaclass's metaclass. In most cases this is the type_call function (since the metaclass's metaclass is almost always type, or something that eventually delegates to it).
  5. type_call creates a new instance of the type it's bound to by calling its tp_new slot.
  6. In our case, that is served by the type_new function.
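The recap above can also be reproduced from pure Python with the standard library's types.new_class, which mirrors BBC: it resolves the metaclass, calls __prepare__, runs a body callback over the namespace, and finally calls the metaclass (the callback name body is made up here):

```python
import types

def body(ns):
    # Plays the role of the compiled class-body function: it populates
    # the namespace mapping that __prepare__ returned.
    ns['attr'] = 100.02

Joe = types.new_class('Joe', (), {}, body)
print(Joe.attr)    # 100.02
```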

The next section picks up from step 6.

type_new

The type_new function is a complex beast - it's over 400 lines long. There's a good reason for this, however, since it plays a very fundamental role in the Python object system. It's literally responsible for creating all Python types. I'll go over its functionality in major blocks, pasting short snippets of code where relevant.

Let's start at the beginning. The signature of type_new is:

static PyObject *
type_new(PyTypeObject *metatype, PyObject *args, PyObject *kwds)

When called to create our class Joe, the arguments will be:

  • metatype - the metaclass, so it's type itself.
  • args - we saw in the description of BBC above that this is the class name, list of base classes and a namespace dict.
  • kwds - since Joe's class definition has no keyword arguments (such as metaclass), this will be empty.

At this point, it may be useful to recall that:

class Joe:
  ... contents

is equivalent to:

Joe = type('Joe', (), dict of contents)

type_new serves both approaches, of course.
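As a quick sanity check of this equivalence, here is the three-argument form in action (the names are illustrative):

```python
def foo(self):
    return 2

# Build the class by calling type directly, as BBC effectively does.
Joe = type('Joe', (), {'attr': 100.02, 'foo': foo})

j = Joe()
print(j.attr, j.foo())    # 100.02 2

# The 1-argument form of type, by contrast, just returns an object's type:
print(type(j) is Joe)     # True
```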

It starts by handling the special 1-argument call of the type function, which simply returns the type of its argument. Then, it tries to see if the requested type has a metaclass that's more suitable than the one passed in. This is necessary to handle a direct call to type as shown above - if one of the bases has a metaclass, that metaclass should be used for the creation [9].

Next, type_new handles some special class attributes (for example __slots__).
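To illustrate one such special attribute: __slots__ suppresses the per-instance __dict__ and turns the listed names into descriptors. A minimal sketch:

```python
class Point:
    __slots__ = ('x', 'y')    # only these instance attributes are allowed

p = Point()
p.x = 1

slot_error = False
try:
    p.z = 2                   # not in __slots__, and there is no __dict__
except AttributeError:
    slot_error = True

print(slot_error)                  # True
print(hasattr(p, '__dict__'))      # False
```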

Finally, the type object itself is allocated and initialized. Since the unification of types and classes in Python, user-defined classes are represented similarly to built-in types inside the CPython VM. However, there's still a difference. Unlike built-in types (and new types exported by C extensions) which are statically allocated and are essentially "singletons", user-defined classes have to be implemented by dynamically allocated type objects on the heap [10]. For this purpose, Include/object.h defines an "extended type object", PyHeapTypeObject. This struct starts with a PyTypeObject member, so it can be passed around to Python C code expecting any normal type. The extra information it carries is used mainly for book-keeping in the type-handling code (Objects/typeobject.c). PyHeapTypeObject is an interesting type to discuss but would deserve an article of its own, so I'll stop right here.

Just as an example of one of the special cases handled by type_new for members of new classes, let's look at __new__. The data model reference says about it:

Called to create a new instance of class cls. __new__() is a static method (special-cased so you need not declare it as such) that takes the class of which an instance was requested as its first argument.

It's interesting to see how this statement is embodied in the code of type_new:

/* Special-case __new__: if it's a plain function,
   make it a static function */
tmp = _PyDict_GetItemId(dict, &PyId___new__);
if (tmp != NULL && PyFunction_Check(tmp)) {
    tmp = PyStaticMethod_New(tmp);
    if (tmp == NULL)
        goto error;
    if (_PyDict_SetItemId(dict, &PyId___new__, tmp) < 0)
        goto error;
    Py_DECREF(tmp);
}

So when the dict of the new class has a __new__ method, it's automatically replaced with a corresponding static method.
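This special-casing is observable from Python: a plain function named __new__ in a class body ends up stored as a staticmethod object in the class dict:

```python
class K:
    def __new__(cls):
        return super().__new__(cls)

# type_new wrapped the plain function in a staticmethod for us.
print(type(K.__dict__['__new__']))   # <class 'staticmethod'>
print(isinstance(K(), K))            # True: instantiation still works
```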

After some more handling of special cases, type_new returns the object representing the newly created type.

Conclusion

This has been a relatively dense article. If you got lost, don't despair. The important part to remember is the flow described in "A brief recap" - the rest of the article just explains the items in that list in more detail.

The Python type system is very powerful, dynamic and flexible. Since this all has to be implemented in the low-level and type-rigid C, and at the same time be relatively efficient, the implementation is almost inevitably complex. If you're just writing Python code, you almost definitely don't have to be aware of all these details. However, if you're writing non-trivial C extensions, and/or hacking on CPython itself, understanding the contents of this article (at least on an approximate level) can be useful and educational.

Many thanks to Nick Coghlan for reviewing this article.

[1]If you're interested in the compilation part, this article provides a good overview.
[2]Note that MAKE_FUNCTION is not reserved for class bodies - as the second disassembly shows, it also creates the function foo. When lexical scoping was added to Python, a separate instruction - MAKE_CLOSURE - was introduced for creating functions that close over variables of enclosing scopes; MAKE_FUNCTION handles functions (including the implicit function holding a class body, as here) that have no free variables.
[3]The other keyword arguments, if they exist, are passed to the metaclass when it's getting called.
[4]You may find it educational to open the file Python/bltinmodule.c from the Python source distribution and follow along.
[5]There always is some metaclass, because all classes eventually derive from object whose metaclass is type.
[6]With the caveat that BBC also calls __prepare__. For a more equivalent sequence, take a look at types.new_class.
[7]As I mentioned earlier, any callable can be specified as a metaclass. If the callable is a function and not a class, it's simply called as the last step of BBC - the rest of the discussion doesn't apply.
[8]I've never encountered real-world Python code where a metaclass has a metaclass of its own. If you have, please let me know - I'm genuinely curious about the use cases for such a construct.
[9]If you've noticed that this is a duplication of effort, you're right. BBC also computes the metaclass, but to handle the type(...) call, type_new has to do this again. I think that creating new classes is a rare enough occurrence that the extra work done here doesn't count for much.
[10]Since they have to be garbage collected and fully deleted when no longer needed.