Sunday, August 28, 2016

luminance designs

luminance-0.7.0 was released a few days ago and I decided it was time to explain exactly what luminance is and what design choices I made. After a very interesting talk with nical about other Rust graphics frameworks (e.g. gfx, glium, vulkano, etc.), I thought it was time to give people some more information about luminance and how it compares to other frameworks.

Origin

luminance started as a Haskell package, extracted from a “3D engine” I had been working on for a while called quaazar. I came to the realization that I wasn’t using the Haskell garbage collector at all and that I could benefit from using a language without a GC. Rust is a very famous language, well appreciated in the Haskell community, so I decided to jump in and learn Rust. I migrated luminance in a month or two. The mapping is described in this blog entry.

What is luminance for?

I’ve been writing 3D applications for a while and I was always frustrated by how badly OpenGL is designed. Let’s sum up OpenGL’s lack of design:

  • weakly typed: OpenGL has types, but… it actually does not. GLint, GLuint or GLbitfield are all defined as aliases to primitive, integral types (i.e. something like typedef float GLfloat). Try it with grep -Rn "typedef [a-zA-Z]* GLfloat" /usr/include/GL. As a consequence, framebuffers, textures, shader stages, shader programs, uniforms, etc. all have the same type (GLuint, i.e. unsigned int). Thus, a function like glCompileShader expects a GLuint as argument, but you can pass it a framebuffer, because it’s also represented as a GLuint – very bad for us. It’s better to consider those as just untyped – :( – handles (see the sketch right after this list).
  • runtime overhead: Because of the point above, functions cannot assume you’re passing a value of the expected type – e.g. the example just above with glCompileShader and a framebuffer. That means OpenGL implementations have to check all the values you pass as arguments to be sure they match the expected type. That’s basically several tests for each call of an OpenGL function. If the type doesn’t match, you’re screwed – see the next point.
  • error handling: This is catastrophic. Because of the runtime overhead, almost any function might set the error flag. You have to check the error flag with the glGetError function, adding a side-effect, preventing parallelism, etc.
  • global state: OpenGL works on the concept of global mutation. You have a state, wrapped in a context, and each time you want to do something with the GPU, you have to change something in the context. Such a context is important; however, some mutations shouldn’t be required. For instance, when you want to change the value of an object or use a texture, OpenGL requires you to bind the object. If you forget to bind the next object, the mutation will occur on the previously bound one. Side effects, side effects…
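
To make the first two points concrete, here is a minimal sketch of the problem – the names mimic the C API, but this is not real binding code:

// With raw OpenGL-style bindings, every object is just an integer handle,
// so the compiler cannot tell a shader from a framebuffer.
type GLuint = u32;

fn compile_shader(shader: GLuint) {
  // Nothing prevents `shader` from actually being a framebuffer handle.
  println!("compiling object {}", shader);
}

fn main() {
  let framebuffer: GLuint = 1; // pretend this came from glGenFramebuffers
  let shader: GLuint = 2;      // pretend this came from glCreateShader

  compile_shader(shader);      // fine
  compile_shader(framebuffer); // also compiles fine… and fails at runtime
}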

The goal of luminance is to fix most of those issues by providing a safe, stateless and elegant graphics framework. It should be as low-level as possible, but it shouldn’t sacrifice runtime performance – CPU load as well as memory bandwidth. That is why, if you know how to program with OpenGL, you won’t feel lost when getting your feet wet with luminance.

Because of the many OpenGL versions and other technologies (among them, Vulkan), luminance has an extra aim: abstracting over the trending graphics APIs.

Types in luminance

In luminance, all graphics resources – and even concepts – have their own respective types. For instance, instead of GLuint for both shader programs and textures, luminance has Program and Texture. That ensures you don’t pass values of the wrong type.
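
To give an idea of what that buys us, here is a minimal sketch with made-up wrapper types – not the actual luminance definitions:

// Made-up wrapper types, just to illustrate strongly typed handles.
struct Program(u32);
struct Texture(u32);

fn use_program(_program: &Program) {
  // …
}

fn main() {
  let program = Program(1);
  let texture = Texture(2);

  use_program(&program);
  // use_program(&texture); // rejected: expected `&Program`, found `&Texture`
}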

Because of the static guarantees provided at compile time by such a strong-typing scheme, the runtime shouldn’t have to check for type safety. Unfortunately, because luminance wraps OpenGL in the luminance-gl backend, we can only add static guarantees; we cannot remove OpenGL’s own runtime overhead.

Error handling

luminance follows the Rust conventions and uses the famous Option and Result types to report errors. You will never have to check against a global error flag, because that is just all wrong. Keep in mind that the try! macro is available by default; use it as often as possible!

Even though Rust provides an escape hatch – i.e. panics – there’s no such thing as exceptions in Rust. The try! macro is just syntactic sugar for:

match result {
  Ok(x) => x,
  Err(e) => return Err(From::from(e))
}
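
For instance – a toy function, not luminance code – errors just bubble up through Result:

use std::num::ParseIntError;

// Parse two integers and propagate any parse error to the caller with try!.
fn parse_two(a: &str, b: &str) -> Result<(i32, i32), ParseIntError> {
  let x = try!(a.parse::<i32>());
  let y = try!(b.parse::<i32>());
  Ok((x, y))
}

fn main() {
  println!("{:?}", parse_two("1", "2"));   // Ok((1, 2))
  println!("{:?}", parse_two("1", "two")); // Err(…)
}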

Stateless

luminance is stateless. That means you don’t have to bind an object to be able to use it. luminance takes care of that for you in a very simple way. To achieve this and keep performance up, it’s required to add a bit of higher-level structure on top of the OpenGL API by constraining how binds can happen.

Whatever the task you’re trying to achieve, whatever the computation or problem, it’s always better to gather / group the work into batches. A good example of that is how magnetic hard disk drives or your RAM work. If you spread your data across the disk (fragmented data) or across several non-contiguous addresses in your RAM, you end up with unnecessary moves. The hard drive’s head will have to travel all over the disk to gather the information, and it’s very likely you’ll destroy the RAM performance (and your CPU caches) if you don’t put the data in a contiguous area.

If you don’t group your OpenGL resources – for instance, you render 400 objects with shader A, 10 objects with shader B, then 20 objects with shader A, 32 objects with shader C, 348 objects with shader A and finally 439 objects with shader B – you’ll add more OpenGL calls to the equation, hence more global state mutations, and those are costly.

Instead of this:

  1. 400 objects with shader A
  2. 10 objects with shader B
  3. 20 objects with shader A
  4. 32 objects with shader C
  5. 348 objects with shader A
  6. 439 objects with shader B

luminance forces you to group your resources like this:

  1. 400 + 20 + 348 objects with shader A
  2. 10 + 439 objects with shader B
  3. 32 objects with shader C

This is done via types called Pipeline, ShadingCommand and RenderCommand.
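
The underlying principle is nothing more than grouping draws by the state they need. Here is a rough sketch of that idea in plain Rust – ShaderId and Object are placeholders, unrelated to luminance’s actual internals:

use std::collections::BTreeMap;

type ShaderId = u32;
type Object = &'static str;

fn main() {
  let draws: Vec<(ShaderId, Object)> = vec![
    (0, "terrain"), (1, "water"), (0, "rocks"), (2, "ui"), (0, "trees")
  ];

  // Group the draws by the shader they use…
  let mut batches: BTreeMap<ShaderId, Vec<Object>> = BTreeMap::new();
  for (shader, object) in draws {
    batches.entry(shader).or_insert_with(Vec::new).push(object);
  }

  // … so each shader is "bound" exactly once per frame.
  for (shader, objects) in &batches {
    println!("bind shader {} and render {:?}", shader, objects);
  }
}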

Pipelines

A Pipeline gathers shading commands under a Framebuffer. That means that all the ShadingCommands embedded in the Pipeline will output to the embedded Framebuffer. Simple, yet powerful, because we can bind the framebuffer once when executing the pipeline and not worry about framebuffers until the next execution of another Pipeline.

ShadingCommand

A ShadingCommand gathers render commands under a shader Program along with an update function. The update function is used to customize the Program by providing uniforms – i.e. Uniform. If you want to change a Program’s Uniform once per frame – and only if the Program is used only once in the frame – this is the right place to do it.

All the RenderCommands embedded in the ShadingCommand will be rendered using the embedded shader Program. As with the Pipeline, we don’t have to worry about binding: we just use the embedded shader program when executing the ShadingCommand, and we’ll bind another program the next time a ShadingCommand is run!

RenderCommand

A RenderCommand gathers all the information required to render a Tessellation (see the sketch after this list), that is:

  • the blending equation, source and destination blending factors
  • whether the depth test should be performed
  • an update function to update the Program being in use – so that each object can have different properties used in the shader program
  • a reference to the Tessellation to render
  • the number of instances of the Tessellation to render
  • the size of the rasterized points (if the Tessellation contains any)
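
To summarize how the three layers nest, here is an illustrative sketch – the definitions are made up, placeholder types stand in for the real resources, and the actual luminance types differ:

// Placeholder resource types, just so the sketch stands on its own.
struct Framebuffer;
struct Program;
struct Tessellation;
struct Blending; // blending equation + source/destination factors

struct Pipeline<'a> {
  framebuffer: &'a Framebuffer,             // every shading command outputs here
  shading_commands: Vec<ShadingCommand<'a>>
}

struct ShadingCommand<'a> {
  program: &'a Program,                     // bound once for all its render commands
  update: fn(&Program),                     // per-frame uniform updates
  render_commands: Vec<RenderCommand<'a>>
}

struct RenderCommand<'a> {
  blending: Option<Blending>,               // blending equation and factors
  depth_test: bool,
  update: fn(&Program),                     // per-object uniform updates
  tessellation: &'a Tessellation,
  instances: u32,
  point_size: Option<f32>                   // only used when rendering points
}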

What about shaders?

Shaders are written in… the backend’s expected format. For OpenGL, you’ll have to write GLSL. The backend automatically inserts the version pragma (#version 330 core for OpenGL 3.3, for instance). Initially, I wanted to port cheddar, my Haskell shader EDSL. But… the sad part of the story is that Rust is – yet – unable to handle that kind of stuff correctly. I started to implement an EDSL for luminance with macros. Even though it was usable, the error handling was seriously terrible – macros shouldn’t be used for such an important purpose. Then some rustaceans pointed out I could implement a (rustc) compiler plugin. That enables new constructs directly in Rust by extending its syntax. This is great.

However, with hindsight, I won’t do that, for a very simple reason: luminance is, currently, simple, stateless and, most of all, it works! I released a PC demo in Köln, Germany using luminance and a demoscene graphics framework I’m working on:

pouët.net link

youtube capture

ion demoscene framework

While developing Céleri Rémoulade, I decided to bake the shaders directly into Rust – to get used to what I had wanted to build, i.e. a shader EDSL. So there’s a bunch of constant &'static str strings everywhere. Each time I wanted to make a fix to a shader, I had to leave the application, make the change, recompile, rerun… I’m not sure it’s a good thing. Interactive programming is a very good thing we can enjoy – yes, even in strongly typed languages ;).
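
To give an idea, such a baked-in shader plus the version pragma prepended by the backend could look like this – the glsl_source helper is hypothetical, not actual luminance-gl code:

// Hypothetical sketch: a shader baked in as a &'static str, with the GLSL
// version pragma prepended before handing the source to the backend.
const VS_BODY: &'static str = "
in vec3 position;

void main() {
  gl_Position = vec4(position, 1.);
}";

fn glsl_source(body: &str) -> String {
  format!("#version 330 core\n\n{}", body)
}

fn main() {
  println!("{}", glsl_source(VS_BODY));
}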

I saw that gfx doesn’t have its own shader EDSL either and requires you to provide several shader implementations (one per backend). I don’t know; I think that’s not so bad if you only target a single backend (i.e. OpenGL 3.3 or Vulkan). Transpiling shaders is a thing, I’ve been told…

sneaking out…

Feel free to dig into the code of Céleri Rémoulade here. It’s demoscene code, so it was rushed before the release – read: it’s not as clean as I wanted it to be.

I’ll provide you with more information in the coming weeks, but I prefer spending my spare time writing code rather than explaining what I’m gonna do – and missing the time to actually do it. ;)

Keep the vibe!

Monday, July 25, 2016

luminance-0.6.0 sample

It’s been two weeks since luminance hit version 0.6.0. I’m very busy at the moment, but I decided to put some effort into making a very minimalistic yet usable sample. The sample uses luminance and luminance-gl (the OpenGL 3.3 backend being the only one available for now).

You’ll find it here. The code is heavily commented and you can of course clone the repository and run the executable with cargo.

I’ll post a more detailed blog entry later on about the application I’m building with luminance right now.

Keep the vibe! :)

Friday, April 29, 2016

Porting a Haskell graphics framework to Rust (luminance)

I wanted to write this new article to discuss something important I’ve been doing for several weeks. It’s actually been a month that I’ve been working on luminance, but not in the usual way. Yeah, I’ve put my Haskell experience aside to… port luminance to Rust! There are numerous reasons why I decided to jump in, and I think it could be interesting for people to know about the differences I’ve been facing while porting the graphics library.

You said Rust?

Yeah, Rust. It’s a strongly and statically typed language aimed at systems programming. Although it’s an imperative language, it has interesting functional conventions that caught my attention. Because I’m a Haskeller and because Rust takes a lot from Haskell, learning it was a piece of cake, even though there were a few concepts I needed a few days to wrap my mind around. Having strong C++11/14 experience, it wasn’t that hard though.

How does it compare to Haskell?

The first thing that amazed me is the fact that it’s actually not that different from Haskell! Rust has a powerful type system – not as good as Haskell’s, but still – and uses immutability as the default semantics for bindings, which is great. For instance, the following is forbidden in Rust and would make rustc – the Rust compiler – freak out:

let a = "foo";
a = "bar"; // wrong; forbidden

Haskell works like that as well. However, you can introduce mutation with the mut keyword:

let mut a = "foo";
a = "bar"; // ok

Mutation should be used only when needed. In Haskell, we have the ST monad, used to introduce local mutation, or more drastically the IO monad. Under the hood, those two monads are actually almost the same type – with different guarantees though.

Rust is strict by default while Haskell is lazy. That means Rust doesn’t know the concept of memory suspensions, or thunks – even though you can create them by hand if you want to. Thus, some algorithms are easier to implement in Haskell thanks to laziness, but some others will destroy your memory if you’re not careful enough – that’s a very common problem in Haskell, due to thunks piling up in your stack / heap / whatever as you do extensive lazy computations. While it’s possible to remove those thunks by optimizing a Haskell program – profiling, strictness annotations, etc. – Rust doesn’t have that problem, because it gives you full access to the memory. And that’s a good thing if you need it. Rust exposes a lot of primitives to work with memory. In contrast with Haskell, it doesn’t have a garbage collector, so you have to handle memory on your own. Well, not really. Rust has several very interesting concepts for handling memory in a very nice way. For instance, objects’ memory is held by scopes – which have lifetimes. RAII is a very well known use of that concept and is important in Rust. You can glue code to your type that will be run when an instance of that type dies, so that you can clean up memory and scarce resources.
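
A minimal sketch of that RAII idea, assuming a made-up GpuHandle type: the cleanup code in Drop runs automatically when the value goes out of scope.

struct GpuHandle {
  id: u32
}

impl Drop for GpuHandle {
  fn drop(&mut self) {
    // In a real backend this would release the underlying GPU resource.
    println!("releasing GPU handle {}", self.id);
  }
}

fn main() {
  let handle = GpuHandle { id: 42 };
  println!("using handle {}", handle.id);
} // `handle` dies here and its drop() implementation runs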

Rust has the concept of lifetimes, used to give names to scopes and specify how long a reference should live. This is very powerful, yet a bit complex to understand at first.
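
For the sake of illustration – a toy example, nothing luminance-specific – an explicit lifetime ties the returned reference to the slice it borrows from:

// The returned reference cannot outlive the slice it borrows from.
fn first<'a>(values: &'a [u32]) -> &'a u32 {
  &values[0]
}

fn main() {
  let values = vec![1, 2, 3];
  let head = first(&values);
  println!("{}", head); // prints 1
}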

I won’t go into comparing the two languages because it would require several articles and a lot of spare time I don’t really have. I’ll stick to what I’d like to tell you: the Rust implementation of luminance.

Porting luminance from Haskell to Rust

The first very interesting aspect of that port is the fact that it originated from a realization while refactoring some of my luminance Haskell code. Although it’s functional, stateless and type-safe, a typical use of luminance doesn’t really require laziness nor a garbage collector. And I don’t like using a tool – read: a language – like a bazooka. Haskell is the most powerful language ever in terms of abstraction and expressivity over speed ratio, but all of that power comes with an overhead. Even though you’ll find folks stating that Haskell is pretty okay for coding a video game, I think it will never compete with languages that are made for real-time computation or reactive programming. And don’t get me wrong: I’m sure you can write a decent video game in Haskell – I qualify myself as a Haskeller and I haven’t been writing luminance just for the joy of writing it. However, the way I use Haskell with luminance shouldn’t require all that overhead – and profiling proved me right: almost no GC was involved.

So… I looked into Rust and learned the language in only three days. I think it’s due to the fact that Rust – which is simpler than Haskell in terms of type system features and takes almost everything else from Haskell – is, to me, an imperative Haskell. It’s like having Haskell minus a few abstraction tools – HKT (but they’ll come soon), GADTs, fundeps, kinds, constraints, etc. – plus total control over what’s happening. And I like that. A lot. A fucking lot.

Porting luminance to Rust wasn’t hard, as a Haskell codebase maps almost directly to Rust. I had to change a few things – for instance, Rust doesn’t have the concept of existential quantification as-is, which is used intensively in the Haskell version of luminance. But most Haskell modules map directly to their respective Rust modules. I changed the architecture of the files to have something clearer. I had been working on loose coupling in Haskell for luminance, so I decided to directly introduce loose coupling into the Rust version. And it works like a charm.
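
Where Haskell would hide a concrete type behind an existential, Rust code commonly reaches for a trait object – a generic sketch of that encoding, not necessarily how luminance does it:

trait Render {
  fn render(&self);
}

struct Triangle;
struct Quad;

impl Render for Triangle {
  fn render(&self) { println!("rendering a triangle"); }
}

impl Render for Quad {
  fn render(&self) { println!("rendering a quad"); }
}

fn main() {
  // The concrete types are erased; all we know is that they implement Render.
  let objects: Vec<Box<Render>> = vec![Box::new(Triangle), Box::new(Quad)];

  for object in &objects {
    object.render();
  }
}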

So there are, currently, two packages available: luminance, which is the core API, exporting the whole general interface, and luminance-gl, an OpenGL 3.3 backend – though it will contain more backends as development goes on. The idea is that you need both dependencies to have access to luminance’s features.

I won’t say much today because I’m working on a demoscene production using luminance. I want it to be proof that the framework is usable and works, and to act as a first true example. Of course, the code will be open-source.

The documentation is not complete yet, but I put some effort into documenting almost everything. You’ll find both packages here:

luminance-0.1.0

luminance-gl-0.1.0

I’ll write another article on how to use luminance as soon as possible!

Keep the vibe!

Thursday, February 18, 2016

Pure API vs. IO-bound API for graphics frameworks

Hi! It’s been a while since I last posted something here. I haven’t been around for a few weeks and I’ve been missing writing here. I didn’t work that much on luminance or other projects, even though I still provided stackage updates for my packages. I worked a bit on cheddar and I hope to be able to release it soon!

Although I didn’t add features nor write code at home, I thought a lot about graphics API designs.

Graphics API designs

Pure API

APIs such as lambdacube or GPipe are known to be pure graphics APIs. That means you don’t have to use functions bound to IO. You use some kind of free monad or an AST to represent the code that will run on the target GPU. That pure design brings us numerous advantages:

  • it’s possible to write the GPU code in a declarative, free and elegant way;
  • because it’s not IO-bound, side-effects are reduced, which is good and improves code safety;
  • one can write GPU code once and run it through several interpreters and/or with different technologies, hence a loosely coupled graphics API;
  • we could even imagine serializing GPU code to send it over the network, store it in a file or whatever.

Those advantages are nice, but there are also drawbacks:

  • acquiring resources might not be explicit anymore and no one knows exactly when scarce resources will be loaded into memory; even though we could use warmup to solve that, warmup doesn’t help much with dynamic scenes where some resources are loaded only if the context requires it (like the player being in a given area of the map, in a game, for instance);
  • interfacing! people will have to learn how to use free monads, ASTs and all that stuff, and how to interface them with their own code (FRP, render loops, etc.);
  • performance; even though it should be okay, you’ll still hit a performance overhead when using pure graphics frameworks, because of the internal mechanisms used to make them work.

So yeah, a pure graphics framework is very neat, but keep in mind there’s – so far – no proof it actually works, scales or is usable for decent high-performance end-user applications. It’s the same dilemma as with Conal’s FRP: it’s nice, but we don’t really know whether it works “at large scale and in the real world”.

IO-bound API

Most of the APIs out there are IO-bound. OpenGL is a famous C API known to be one of the worst regarding side-effects and global mutation. Trust me, it’s truly awful. However, the pure APIs mentioned above are built on top of those impure, IO-bound APIs. So we couldn’t do much without them.

There are side-effects that are not that bad. For instance, in OpenGL, creating a new buffer is a side-effect: it requires the CPU to tell the GPU “Hey buddy, please create a buffer with that data, and please give me back a handle to it!”. Then the GPU replies “No problem pal, here’s your handle!”. This side-effect doesn’t harm anyone, so we shouldn’t worry about it too much.

However, there are nasty side-effects, like binding resources to the OpenGL context.

So what are the advantages of IO-bound designs? Well:

  • simple: indeed, you have handles and side-effects, and you have a (too) fine control over the instruction flow;
  • performance, because IO is the naked real-world monad;
  • because IO is the type any application ultimately lives in (think of the main function), an IO API is simple to use in any kind of application;
  • we can use (MonadIO m) => m to add extra flexibility and create interesting constraints.

And drawbacks:

  • IO is very opaque and is not referentially transparent;
  • IO is a dangerous type in which no one has any guarantee about what’s going on;
  • one can fuck up everything if they aren’t careful;
  • safety is not enforced as it is in pure code.

What about luminance’s design?

Since the beginning, luminance has been an API built to be simple, type-safe and stateless.

Type-safe means that all the objects you use belong to different type sets and cannot be mixed with each other implicitly – you have to use explicit functions to do so, and it has to be meaningful. For instance, you cannot create a buffer and state that the returned handle is a texture: the type system forbids it, while in OpenGL almost all objects live in the GLuint set. It’s very confusing and you might end up passing a texture (GLuint) to a function expecting a framebuffer (GLuint). Pretty bad, right?

Stateless means that luminance has no state. You don’t have a huge context you have to bind stuff against to make it work. Everything is stored in the objects you use directly, and context-specific operations are translated into a different workflow so that performance is not destroyed – for instance, luminance uses batch rendering so that it performs smart resource binding.

Lately, I’ve been thinking about all of that: either turn the API pure or leave it the way it is. I started to implement a pure API using self-recursion. The idea is actually simple. Imagine this GPU type and the once function:

import Control.Arrow ( (***), (&&&) )
import Data.Function ( fix )

newtype GPU f a = GPU { runGPU :: f (a,GPU f a) }

instance (Functor f) => Functor (GPU f) where
  fmap f = GPU . fmap (f *** fmap f) . runGPU

instance (Applicative f) => Applicative (GPU f) where
  pure x = fix $ \g -> GPU $ pure (x,g)
  f <*> a = GPU $ (\(f',fn) (a',an) -> (f' a',fn <*> an)) <$> runGPU f <*> runGPU a

instance (Monad m) => Monad (GPU m) where
  return = pure
  x >>= f = GPU $ runGPU x >>= runGPU . f . fst

once :: (Applicative f) => f a -> GPU f a
once = GPU . fmap (id &&& pure)

We can then build pure values that perform a side-effect for resource acquisition once and then hold the same value forever, with the once function:

let buffer = once createBufferIO

-- later, in IO
(_,buffer2) <- runGPU buffer

Above, the type of buffer and buffer2 is GPU IO Buffer. The first call to runGPU buffer will execute the wrapped action – calling the createBufferIO IO function – and will return buffer2, which just stores a pure Buffer.

Self-recursion is great for implementing local state like that, and I advise having a look at the Auto type. You can also read my article on netwire, which uses self-recursion a lot.

However, I kinda think that a library should have well defined responsibilities, and building such a pure interface is not the responsibility of luminance because we can have type-safety and a stateless API without wrapping everything in that GPU type. I think that if we want such a pure type, we should add it later on, in a 3D engine or a dedicated framework – and that’s actually what I do for demoscene purposes in another, ultra secret project. ;)

The cool thing with luminance using MonadIO is the fact that it’s very easy to use with whatever types developers want in their applications. I really don’t like frameworks whose purpose is clearly not flow control yet which enforce flow control and wrapping types! I don’t want to end up with a Luminance type or a LuminanceApplication type. It should be simple to use and seamless.

I’m actually starting to think I made too much of that pure API design idea. The most important parts of luminance should be type-safety and statelessness. If one wants a pure API, then they should use FRP frameworks or write their own stuff – with free monads for instance, and it’s actually fun to build!

The next big step for luminance will be to clean up the uniform interface, which is a bit inconsistent and unfriendly to use with render commands. I’ll let you know.