Hi. Name's Abdi Dahir and this is a place for me to mark the web in w/e strange way I feel.

I have a ton of interests and maybe not enough focus. I will have a little more to show for myself than a fractal patterned orb soon.

1K3tSx6hgz13Vx4A8Rt6pKZWNnBSFLEu3f

on the ground

  • tdot: bahen center
  • brooklyn: outpost
  • sf: another cafe
  • sf: social study
  • vancouver: waves coffee house
  • work: relic

April 1 2024:

Creative power

Short one to vent... creativity is difficult. Voxel art is a limiting medium and even here the space feels too large. I am extremely grateful for the work of others, as the art I see online has been a consistent source of inspiration for what elements to add to a scene, for how to colour a character and for what the models should look like.

For years my work has been based on existing IP or characters I've fallen in love with. I'm hoping at some point that pattern changes, but for now it's been fun at least. Being able to skip the "what" and go straight into the "how" seems helpful. Soon I'd like to share an example of that on twitter again with some recent work I've been doing unrelated to the game.

If you are interested, look at my retweets. Generally each post has something in there that has made me think "I want to do that..."


Feb 4 2024:

PNG to texture

I looked into getting PNG loading working and finally managed to get some of it done.

As usual it's a little trickier than expected; first, PNG reading is not free. Reading the format by hand is annoying, so a cross-platform library was needed. Thankfully there was something simple that I could link statically: libpng! Using the same mingw cross-platform compilation setup, I updated my makefiles to build in libpng.

LDLIBS = --static `/usr/bin/x86_64-w64-mingw32-sdl2-config --static-libs` -ldWHATEVER_OTHER_LIBS -lpng -lz

Note the ordering and the presence of -lz, the zlib compression lib needed by libpng. This turned out to be necessary for linking to happen.

Once compiled in, reading the PNG data also turned out to be trickier than expected. libpng has a pretty obtuse API, but thanks to FreeCX's help here, using the library was not too painful.
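For the curious, the read path ends up looking roughly like this. It's a simplified sketch (error handling trimmed, helper name made up) that assumes the file is already 8-bit RGBA; real code needs the usual png_set_* transforms for palette/grayscale/16-bit images.

#include <png.h>
#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

unsigned char* load_png_rgba(const char* path, int* w, int* h) {
  FILE* fp = fopen(path, "rb");                      // "rb"! more on that below
  if (!fp) return NULL;

  png_structp png = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
  png_infop info = png_create_info_struct(png);
  if (setjmp(png_jmpbuf(png))) {                     // libpng's error path
    png_destroy_read_struct(&png, &info, NULL);
    fclose(fp);
    return NULL;
  }

  png_init_io(png, fp);
  png_read_info(png, info);
  *w = (int)png_get_image_width(png, info);
  *h = (int)png_get_image_height(png, info);

  size_t rowbytes = png_get_rowbytes(png, info);
  unsigned char* pixels = (unsigned char*)malloc(rowbytes * (*h));
  png_bytep* rows = (png_bytep*)malloc(sizeof(png_bytep) * (*h));
  for (int y = 0; y < *h; y++)
    rows[y] = pixels + y * rowbytes;                 // libpng reads row by row

  png_read_image(png, rows);
  png_destroy_read_struct(&png, &info, NULL);
  free(rows);
  fclose(fp);
  return pixels;                                     // caller frees
}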

Finally the bit that actually matters: storing the PNG data into a texture and keeping a handle to it. With an update to the final shader pass I was able to load a glorious PNG into the background.
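Getting it onto the GPU is then the usual glTexImage2D dance; roughly this, assuming the loader above handed back tightly packed 8-bit RGBA pixels:

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);              // rows are tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// keep `tex` around as the handle; PNG rows come top-down, so flip or adjust UVs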

Unfortunately this still did not run on windows; the final gotcha was within the calls to read files. A problem I've managed to run into and then forget every single time: never forget to specify exactly how to read non-text files!!

fopen(file, "rb")

vs

fopen(file, "r")

where necessary.

With this setup, I'm hoping to load in larger texture sheets for UI/UX elements... the current engine has so many floating texture handles now that I'm extremely curious when I'm going to blow up gpu memory. How many bound textures is too many...
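For what it's worth, the number of texture objects is only really bounded by GPU memory; the hard per-draw limit is on simultaneously bound texture units, which can at least be queried:

GLint maxUnits = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxUnits);
printf("max combined texture units: %d\n", maxUnits);  // usually far more than one pass needs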

Either way, we move on


Dec 3 2023:

Year End and Community Power

It is that time of year again to reflect and look towards the future. Where is the game at!!!

Unsurprisingly there hasn't been much progress, just slow incremental changes and the usual rendering features. Bit of a guilty pleasure, toiling away at random bits of effects... feels weird even calling it a guilty pleasure, this whole thing is for my own enjoyment after all! The guilty feeling of course comes from not having more done or playable. I think an easy target to set at this point is to have something done end to end, a vertical slice. Going from the already built out intro, through the school, to two sets of fights would be enough, and the tools are there to do it. The big missing piece of course though is ... the gameplay!! Not much direction there and tbh not too sure how to "decide" on any bit of it. It's been too many years of ideas and my interests in general have changed greatly. RTS -> Tactics -> Action -> FG kind of muddied my brain for what a fun simple game should look like.

If I were to ask what it would take to become more serious about the project, I'm afraid I won't like the answer. I think it involves shifting to a community of developers to push me to do more, act more and think more about indie development. I say this because of how much of an effect the vancouver FG scene has had on my desire to play games. Two years ago I couldn't imagine spending that many hours playing anything; now it's just an afterthought, of course I'll come through to xyz event! etc etc. I see this effect happen with out-of-town friend groups too, playing games like DnD now, squash or w/e. As a goal for the new year I should look to harness this community power for my other interests as well.

I say afraid cause "community" doesn't match the vision I originally had for this indie stuff. I really value independence, and seeking a community for development really doesn't match how I view myself but you know... w/e i guess, tis a new year.

So if you're in Vancouver, know of any indie meetups and for w/e reason found your way here, please reach out on Twitter :) Regardless, I'll likely try to find you soon...

Looking forward to 2024; with how my job is going and how I'm feeling, I'm expecting big changes again in the future!


Nov 18 2023:

Depth and Effects

Recently I did some work on my effects system and produced this: a fake rim light and trail effect when dashing. I'd like to talk a bit about what I did to get that up.

The effect system for now just takes requests and does an additional render for each request into different render textures (colour AND depth swapped out) using the scene pass's framebuffer. The shader program and framebuffer are unchanged; it's literally just another write to different texture targets, with additional uniforms passed in. The textures are all stored on each requesting entity, and they are later combined with additional render passes.
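The "swapped out" part is just re-pointing the scene FBO's attachments at the request's textures before the extra draw; roughly this (the texture names here are made up):

glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, effectColourTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, effectDepthTex, 0);
// ... draw the request with the same program, extra uniforms set ...
// then point the attachments back at the normal scene colour/depth textures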

With the request fulfilled, the postprocess step kicks in and each model starts its pass, grabbing data from w/e textures it knows about. The effects texture is picked up here and used in the passes that render out the unit models to a models texture. Everything is built out in three phases.

First, the effects texture is built by processing each model's effects texture independently, adding additional gameplay effects on top of what was already drawn on the texture via the requested effect draw calls. Things like the white flash when a unit is hit and unit occlusion effects are done here. The effect depth data is made writable here but not really updated, unfortunately. This leads to some issues which I can hopefully resolve later... model data can be drawn behind another unit's effects based on depth. Ideally model data is always on top of any effect, but that data is not prepared yet by the time of this pass.

Second, the model textures are built by taking the model pass and applying data from these effects. The texture colour data is read and whenever the effect texture has data, it is placed on top. To ensure effect data stays rendered at the correct depth as well, the model's depth data is made writable and written to when appropriate. If there's effect data present and the model has no data, the effect's depth is used and stored in the model's depth.

This process of updating depth data while reading it is made possible by this pattern.

glEnable(GL_DEPTH_TEST);
glDepthMask(true);
glDepthFunc(GL_ALWAYS);

It's important that the depth test is active so depth writes happen, but the test function is set to always pass, allowing us to look at all fragments.

Finally, the composite unit texture is built, taking all the model textures and applying them to one final texture that will eventually be pasted on top of the scene. We use the depth data for all models to determine when one fragment should be overridden by another. This means the order we process models in does not matter for the final unit texture.
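In other words the depth test does the arbitration. Roughly how a composite loop like that can be driven (fullscreenQuad, bindTexture and the shader that writes gl_FragDepth from the model's depth texture are assumed helpers for this sketch, not my actual code):

glBindFramebuffer(GL_FRAMEBUFFER, unitCompositeFbo);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);                      // nearer model fragments win, order is irrelevant
for (const auto& model : models) {
  bindTexture(0, model.colourTex);
  bindTexture(1, model.depthTex);
  fullscreenQuad.draw();                   // shader discards empty texels and
                                           // writes gl_FragDepth from depthTex
}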

These separate passes allow a ton of flexibility that we would not get by simply writing to the final scene buffer immediately, at the cost of multiple writes. Gameplay can affect rendering without the render system knowing about every gameplay element. The effect system alone is responsible for dealing with gameplay, and how effects are iterated on is independent of the broader rendering system, which is nice for ... future me.

I'm very happy with the setup. What I want to do next is scene transition effects, which means more post processing fun! I'm looking to things like Pokemon for inspiration.


August 21 2023:

Minor Update

I'm glad no one held their breath!

My continued time with the FGC here at vsb has consumed most of my free weekends and it's been too much fun to really stop! Non-stop new releases are grabbing my attention and the grind itself has always been super captivating. If you are interested in this, please check out the vancouver street battle streams!

Besides gaming I have made time to do engine work occasionally; the game itself however seems further and further out! Not entirely sure how to solve the battle design issues I've been running into. I'm seriously considering making something smaller just to get it out there, but even that is hard! How do people come up with game ideas really??? I'm considering redoing an older Tron project in the new engine, but even that is not super exciting to me. A simple arcade-like game however is definitely the direction I'd like to go!

Summer is coming to an end. Hoping to find inspiration again soon, but until then I'll continue to tool around and see what comes of it!

I'm glad I put out this website tool though, look how much use it's getting :)


December 26 2022:

Site Maintenance and Tooling

I'm writing this late into the night, after finally completing it. At the request of my sister I've written tooling to generate blog posts based on what I've done here. After writing the scripts I did the slow tedious work of migrating everything I've written so far on this site to be generated using the new tooling. You can check out the project here on my github. It's simply a pair of bash scripts that take text files and apply html templates to them to generate webpages. The sidebar and header are templated out; the contents of these sections are written once and applied to every "blog" entry found in the content directory. After the pages are all spit out, another script gets run that moves it all over to the webserver I'm running this on.

There are a few bits missing: I need to nuke the directory on the server before beginning the copy, and I need to automate copying the non-generated files like js and game files (which reminds me, I never did link the webgl voxel engine I made, did I?) over to the server as well. They are a one-time copy, but for completeness the tooling should cover it. Especially if I plan on nuking the directory.

Also a bit tedious, but I cannot get the server to trust my public key. I'll probably need to update the server distro, w/e it's running, but who wants to do that.

With this I might actually blog at a regular pace :O

I wouldn't hold my breath though...


Jan 8 2022:

Learning to polish

I'm not apologizing, I'll just try to do better than yesterday with how often I write.

Covid is running on year 3 now and it's not been the best for my work ethic. However, I'm incredibly happy with how my art has improved over the last year.

The main thing is detailing. I used to think leaving blank walls, floors, solid colours etc. looked fine; it gave things a toy box/doll house feel, I thought. Now more than anything I feel it just comes off as unfinished. If I were generating content programmatically I think I could get away with this, but I am not. Hell, if the game wasn't based on an IP I think I could have gotten away with it.

The detailing rules I will try to focus on for any new scenes and characters I make are:

1. No solid colours. Break them up with blotches of a slightly saturated version of w/e colour.

2. Avoid large flat areas, delete a voxel or two, add more objects.

3. Walls and floors must have a pattern, but that pattern cannot take up the whole area. Solid is fine to accent.

My latest posts on twitter I think show the improvement. Because of this I'm also more confident about rendering at higher resolutions instead of going for the 480x320 pixelated look. I think it's better for it overall.

I'm more deadset on making a medabots game now too. Even without the license I'm at the point where I'm willing to put this game out as a free download if needed. Maybe open up a patreon to earn something off it. I want to finish this, at any cost!

Until my next post, see you here or on Guilty Gear.


June 9 2021:

Getting back to blogging

Hello yes this site is still active ...

I haven't made time to update this site in a long time but I still really love it. Having a record of thoughts from now 7 years ago is so inspiring and sometimes pretty cringey... feels good to share it either way. I still love the aesthetic and of course am still actively developing the game engine. However, I mostly post updates about that on twitter now. Please check out images and gifs there.

Since the last blog update, I've switched jobs once more and managed to get a sick work schedule a year in. This gave me so much time to really build out the game. It's honestly come a long way from its humble roots as a voxel renderer. I think I'm a bit addicted to world building/story telling though, so I'm hoping to make it a decently long (10hrs?) rpg in the medabots universe. I love the characters and maybe I can get the license; if not, generic mini-robot-battler go! I'm still designing as I make tech so nothing is really set in stone, but I'm a lot more structured about it all now, and the number of things the game can be gets smaller and smaller each day, which is a great sign if I ever want to finish haha.

I'm on and off with how heads down I am with working on this normally, but covid this year has made it a bit rocky. Something about staying in to do work when I just spent all day working from home on my official job isn't really too enticing ...

You will hear from me again ... soon ... probably!


Dec 17 2017:

Improved engine and new flames

It's been a crazy year. I've moved to Vancouver, jumped jobs twice and am more deadset than ever on building this thing out.

I've learned a lot at artillery and maxis, mostly about engine design and entity component systems. I've trashed most of my old code and rebuilt the voxel engine as a proper game engine. I've put in a lot of work on the lua bindings and now have a clean interface between lua code and c++ code that's all handled behind the scenes by the engine. All C++ components and events can have lua code bound to them by simply adding handlers such as OnEventName(arg1, ... argN) with no fancy bridge code other than a single macro, DEFINE_EVENT(onEventName, argType1 ... argTypeN). There are also macros for exposing any member variable of a class to lua, allowing the lua file to both read and set the value. It's cool. I'll show it off here at some point.
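To give a feel for the lua side of this, here's roughly what one of those event dispatches boils down to under the hood. This is an illustrative sketch using the plain Lua C API with made-up names (an "OnDamaged" handler), not the actual macro expansion:

extern "C" {
#include <lua.h>
#include <lauxlib.h>
}
#include <cstdio>

// Hypothetical example: fire an "OnDamaged" handler defined in an entity's lua file.
void dispatchOnDamaged(lua_State* L, int entityId, float amount) {
  lua_getglobal(L, "OnDamaged");          // look up the handler by name
  if (!lua_isfunction(L, -1)) {           // no handler bound, nothing to do
    lua_pop(L, 1);
    return;
  }
  lua_pushinteger(L, entityId);           // push the event args
  lua_pushnumber(L, amount);
  if (lua_pcall(L, 2, 0, 0) != 0) {       // 2 args, 0 results
    fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
    lua_pop(L, 1);                        // report and swallow the script error
  }
}

The macro presumably just stamps out dispatchers like this (plus the reverse direction for member variables) so none of it has to be written by hand.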

That's all nice and everything, but I still haven't built a new game since 2014 and I want that to change this coming year. I'm working fast towards a prototype, trying to minimize distractions (it's so hard to not work on fun rendering stuff). My current fulltime job has great work life balance so things are looking up.

You will hear from me again soon!


April 30 2016:

UI and text rendering woes

Work work work, but I've made some time to look at the engine a bit more.

So I've been looking at performance and started measuring specific parts of the rendering pipeline. I haven't been getting good FPS lately and I mostly assumed it was due to the voxel code, hence all the threading work. What I forgot to pay attention to was the text rendering code I slapped in! It alone took up two thirds of my rendering time! It obviously needed to go!

I've been using FreeType following this very helpful guide which got me to a point where I could simply write text to the screen. However due to the lack of caching and constant texture generation, my FPS took a serious hit. At the time, I was still at 60FPS (I normally have vsync on blah) so it went unnoticed and performance was not my first priority.

I had the option of either figuring out how to improve on this or moving away from it entirely and finally looking at some UI libs. I decided to go with the latter and have started using ImGui, a very self-contained library that does basically everything I need currently. My FPS on my windows machine is now at something ridiculous like 1200FPS in top down view and around 500FPS in free cam.
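For anyone curious what "basically everything I need" looks like in practice, a debug window is just a handful of immediate-mode calls per frame. A minimal sketch, assuming the SDL2/OpenGL backends are already initialized and have had their per-frame NewFrame functions called (the window contents and variables here are made up):

ImGui::NewFrame();
ImGui::Begin("Render stats");                     // any label creates/updates the window
ImGui::Text("Frame time: %.3f ms", frameTimeMs);  // frameTimeMs: some timer of yours
ImGui::Checkbox("Free cam", &freeCam);            // widgets write straight into your variables
ImGui::End();
ImGui::Render();                                  // backend then draws ImGui::GetDrawData()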

You can view the UI Demo here.

Obviously I'm pretty happy with the switch, I'm not too sure how practical this lib will be in the long run but I'm tempted to find out :p


January 3 2016:

Persistent Data && Threading Opts

So as you may have noticed, I took a bit of a break due to the new job. Work is exciting and consuming 110% of my time, but since I've been on break, I obviously took the time to tackle more of the game.

So although I did most of the multithreading work in July, I did not take the time to debug it thoroughly or truly break down all work to be exclusively local to chunks, i.e. lighting was not threaded. More importantly, rendering required all threads to stop before I could render out a chunk; otherwise I ran the risk of corrupting data being bound to the gpu. Fortunately I banged my head against my keyboard and sorted that out using the minimal amount of blocking code. Threads no longer need to be barrier synchronized while I write to GPU memory.

The solution mostly boiled down to keeping two states: the currently bound state of the chunk, and the current active state of the chunk. I maintain separate values for things such as block count, x,y,z region start and length in the buffer etc., one for the version that is currently bound to gpu memory and one for the version that is not. This does not mean chunk data is duplicated 100% of the time; it just means the vertex data that's normally just on the gpu and sometimes on the stack is kept around longer than before and is only cleared at the time of the next glBindBuffer call. When that call is made I copy the current state synchronously (just a few integers needed) and release locks, allowing any thread to attempt to re-update the vertex data if need be. This allows me to render and recreate meshes at the same time.
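Boiled down to code, the idea looks something like this. A simplified sketch with invented names (GL headers assumed), not the real chunk class; the render thread only holds the lock long enough to copy a few integers and steal the vertex data:

#include <mutex>
#include <vector>

struct ChunkState {
  int blockCount = 0;
  int regionStart = 0;    // where this chunk lives in the buffer
  int regionLength = 0;
};

struct Chunk {
  ChunkState bound, active;         // GPU-side state vs. worker-thread state
  std::vector<float> activeVerts;   // kept on the heap until the next upload
  std::mutex lock;
  GLuint vbo = 0;

  // Render thread only: snapshot the active state, then upload outside the lock.
  void upload() {
    std::vector<float> verts;
    {
      std::lock_guard<std::mutex> guard(lock);
      bound = active;                      // just a few ints
      verts = std::move(activeVerts);      // workers may rebuild immediately
    }
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float),
                 verts.data(), GL_STATIC_DRAW);
  }
};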

Lighting basically required a rewrite of the floodfill algorithm which I originally got from the guys making Seed of Andromeda. The change mostly revolved around the addition of a lock and local queues.

I made a quick demo earlier to demonstrate that portion of the update.

After feeling pretty confident about my threading skills I decided to crush that confidence by attempting to thread reading/writing chunks. I did manage to get this all working, but my self-esteem did not hold out unfortunately. Currently I am writing out each chunk to its own file based on its position. Since I reuse chunks as I move around the world, assigning a chunk an id and using that as the file name is not helpful. Positions worked fine as long as I made sure to synchronize reads and writes to a chunk file. Otherwise I run the risk of overwriting a file with data from a chunk that was recently reassigned to the conflicting position.

The synchronization between the update calls and the read/write calls was mostly a freebie due to how I organized chunks that required unloading. If a chunk is in the unloading queue, it cannot be repositioned or put into the update queue. Only chunks stored in a free queue can be reinited.

Thanks to SDL's SDL_GetPrefPath(orgname, dirname) function, there was very little os specific code necessary. That call provided the correct directory to write to; the only extra work was making a safe mkdir call, which I did like so.

#ifdef __linux__
#include <sys/stat.h>   // POSIX mkdir takes a mode
#else
#include <direct.h>     // mingw's one-argument mkdir
#endif

int mkdir_safe(const char* path) {
#ifdef __linux__
  return mkdir(path, S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);
#else
  return mkdir(path);
#endif
}

Please excuse the laziness; ideally the #ifdef would be defined for windows and default to the posix mkdir call, however I did not want to enumerate the possible windows/windows compiler flags.

Finally, a point on the format: currently I'm writing out the chunk byte for byte, so each chunk file is always 32KB. I intend to run-length encode the files shortly. The next step will be to organize multiple chunks into a single file, a la minecraft's region files. I do not envision that being too complicated given the right amount of synchronization.
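For context, the byte-for-byte write really is as simple as it sounds. A sketch (the org/app names and 32^3 single-byte blocks are assumptions for illustration):

#include <SDL.h>
#include <cstdio>

bool saveChunk(const unsigned char* blocks, int cx, int cy, int cz) {
  char* pref = SDL_GetPrefPath("yourorg", "yourgame");  // per-user writable dir from SDL
  if (!pref) return false;

  char path[512];
  std::snprintf(path, sizeof(path), "%schunk_%d_%d_%d.bin", pref, cx, cy, cz);
  SDL_free(pref);

  FILE* f = std::fopen(path, "wb");                     // binary mode matters on windows
  if (!f) return false;
  std::fwrite(blocks, 1, 32 * 32 * 32, f);              // 32KB, byte for byte
  std::fclose(f);
  return true;
}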

BUT OMG IT WORKS!!!

Yea persistence, woo! The next thing I'd like to tackle is building out an entity/component system. Work has inspired me to build out a proper game engine. Having a clear separation between engine and game code at work allows everyone to make a tremendous amount of change on the fly. I really want that. I will have it!


July 29 2015:

Cross compilation with mingw

So first, I'd like to start out by thanking all the compiler people out there. Honestly, you don't get enough love. You're all the best.

My experience with emscripten has been mostly great; ignoring the SDL2 hiccups I ran into, porting to the web was surprisingly easy. I feared the worst for mingw though, so I originally attempted to port the code over to Visual Studio ... which was incredibly painful.

Turns out mingw is not so bad, as long as you are on Arch Linux! Every library that you would like cross compiled has been packaged up nicely for you in the AUR. The only thing you need to do is include them. I did this and boom my code ran on ...

Winnnnddoowwws.

The only tricky bit is sorting out the required Makefile. I've attached the one I used, which will hook up your SDL2, GLEW, freetype, GLM and lua dependencies. I installed each one of these packages again using the AUR mingw-w64-* version. I believe 64 bit is required to get C++ threading to work out of the box. Either way, you can build a 32 bit version simply by changing the architecture in the Makefile from x86_64 to say i686.


# Build tool
CC = /usr/bin/x86_64-w64-mingw32-gcc
CXX = /usr/bin/x86_64-w64-mingw32-g++
# Build flags
CPPFLAGS = -O3 -std=c++11 -Wall -pthread -DGLEW_STATIC
# Includes
CPPFLAGS += -I/usr/x86_64-w64-mingw32/include/ -I/usr/x86_64-w64-mingw32/include/freetype2/
# LD Flags
LDFLAGS = -L/usr/x86_64-w64-mingw32/lib
# LD Libs
LDLIBS = -static `/usr/bin/x86_64-w64-mingw32-sdl2-config --libs` -ldinput8 -ldxguid -ldxerr8 -luser32 -lgdi32 -lwinmm -limm32 -lole32 -loleaut32 -lshell32 -lversion -luuid -lglew32 -lopengl32 -lm -llua -lfreetype
# Source
main: yourcode.o
# Build Source
all: main

I'm a bit of a Makefile noob, so please excuse any strangeness (like how I'm creating an 'exe' main without the proper extension).


A few interesting things about this Makefile:
1. The mingw32/include/ section deals with GLM and Lua automatically, as I installed both of them via the AUR so they were added into the folder correctly. Even though GLM doesn't need to be compiled, it's more convenient to just have it included the same way the other libs are
2. The -DGLEW_STATIC and -static flags are REQUIRED; mingw is more than happy to dynamically link to libraries, and this forces SDL2 and GLEW to link statically
3. I have no idea what the libs between the sdl config call and -lglew32 are, but I do know that SDL2 will not link without them. They seem to have something to do with windows' keyboard handling?

I also put in a TOOOON of work making the terrain infinite. I would love to dedicate a blog post to it but I don't think I'm done with it just yet. I would love for it to be faster, but hey ... looks pretty no?

Really though, I intend to make the game ... soon. The next post HAS to be about game content.


July 16 2015:

Multithreading in OpenGL

So whenever I read posts by voxel engine writers that list the features they've implemented, multithreading is almost always listed as a simple bullet point. They then go on to talk about meshing problems, LOD etc. There's very little discussion of how they implemented it. I'm gonna try to change that, cause damn does it ever suck to multithread code when OpenGL is single threaded only.

So I lied a bit: OpenGL does support multithreading with shared contexts, but as far as I can tell the API to do it is windowing system specific, i.e. OS specific. This wasn't really a path I wanted to go down. Thankfully, it's not that big of a deal, it just led to a very important condition that I needed to maintain.

ALL glXXXXX methods must happen on the thread that inited the context!

This was a bit unfortunate. I was originally planning on threading my mesh generation code directly, i.e. no changes to its implementation. It originally created a fixed size buffer on the stack and stored all the mesh vertices there. This was then immediately bound to a chunk's VBO and discarded when the function returned. The last step, binding to the VBO, would not be possible anymore if this were running in a separate thread. This meant that the mesh could not be stored on the stack, which then led to the abstraction of the mesh and heap allocation for its storage. The code was updated so that the chunk has a reference to a mesh object; it writes to it when the mesh generation function is called, and it is cleared after the main thread binds its data to a VBO.

Next, in order to define what work a thread would do and what should happen after its completion, I created a threadPool class with two queues: a workQueue and a responseQueue. The workQueue takes a task structure which contains data to be worked on and a C++ lambda function (yea! lambdas!) to execute on this data. The responseQueue takes the result of the lambda function. The two queues are synchronized with their own mutex and wait channel. I'm actually very proud of this threadPool class, I'll be posting it on github soon. It's just a header and it provides basically everything I need for controlling the threadPool's work flow, pausing work, adding work etc.
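Until it's up, here's a very stripped-down sketch of the two-queue shape (no pausing, invented names, not the actual header): tasks carry a chunk plus a lambda, and finished chunks land on a response queue for the render thread to drain.

#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>

struct Chunk;  // defined elsewhere

struct Task { Chunk* chunk; std::function<void(Chunk*)> work; };

class ThreadPool {
public:
  explicit ThreadPool(int n) {
    for (int i = 0; i < n; ++i) workers.emplace_back([this] { run(); });
  }
  ~ThreadPool() {
    { std::lock_guard<std::mutex> g(workMtx); stopping = true; }
    workCv.notify_all();
    for (auto& t : workers) t.join();
  }
  void push(Task t) {
    { std::lock_guard<std::mutex> g(workMtx); workQueue.push_back(std::move(t)); }
    workCv.notify_one();
  }
  // Render thread: grab everything that's finished so its mesh can be bound.
  std::deque<Chunk*> drainResponses() {
    std::lock_guard<std::mutex> g(respMtx);
    return std::exchange(responseQueue, {});
  }

private:
  void run() {
    for (;;) {
      Task t;
      {
        std::unique_lock<std::mutex> g(workMtx);
        workCv.wait(g, [this] { return stopping || !workQueue.empty(); });
        if (stopping) return;
        t = std::move(workQueue.front());
        workQueue.pop_front();
      }
      t.work(t.chunk);                      // e.g. generate the chunk's mesh
      std::lock_guard<std::mutex> g(respMtx);
      responseQueue.push_back(t.chunk);     // ready to bind next frame
    }
  }

  std::vector<std::thread> workers;
  std::deque<Task> workQueue;
  std::deque<Chunk*> responseQueue;
  std::mutex workMtx, respMtx;
  std::condition_variable workCv;
  bool stopping = false;
};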

So when it comes time to render the scene, I pause all active threads and drain the responseQueue of its completed meshes, binding them all. I compute visible chunks, and for any chunks that need updating, I push them onto the queue with a lambda function executing their mesh generation method. After the scene is rendered I resume all threads! Boom, multithreading!

... Except for the work order. If you were to just keep pushing chunks that needed updating onto the queue, your worker threads would not be able to keep up, causing them to continue to work on chunks that aren't visible to the player anymore. You could use a stack instead of a queue to deal with this issue. On top of work order, at least for me, I had issues where the same chunk was on the workQueue multiple times. I do not do any checking to see if the chunk exists in the queue or if it still needs updating. I preferred to just wipe the workQueue during the pause-and-drain-responses phase and maintain the queue's FIFO ordering instead.

After getting all that sorted out, I gained level of detail for free. I can set the 'sample' rate in my mesh generator (it just sets the increment step in the various for loops) based on distance from the player and push the chunk onto the workQueue if the sample rate changed.
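The "free" part really is just picking a loop increment from distance; something like this (the thresholds are made up):

int sampleStepFor(float distanceToPlayer) {
  if (distanceToPlayer < 64.0f)  return 1;   // full detail near the player
  if (distanceToPlayer < 128.0f) return 2;   // skip every other voxel
  return 4;                                  // far away: coarse mesh
}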

Of course, this leads to issues in areas where chunks at different LODs meet. I got around this by extending each chunk's mesh by one sample step. It works pretty well; there are still times where holes are visible but I enjoy the effect enough to not care.

Here's a vid of it.

I swear I'll get back to the game part of this reaaalll soon.


July 10 2015:

Voxel Segmentation

I've been working on how enemies should be fought in my game and I've decided that in order to best take advantage of the engine, enemies should die by being broken apart, i.e. whenever an enemy's body becomes disjoint, all or one of the disjoint sets of their body should be destroyed. Sounds fun buuuut it's been a pain getting it in order. This post will discuss what I've learned about set segmentation for voxel data.

First my assumptions:
1. Enemy starts off completely connected
2. Enemy loses at most one voxel at a time
3. Diagonal voxels are not considered connected

Nothing crazy, but it should be stated. With this I began building out a solution. I initially came to the conclusion that the only way a set could ever become disjoint is if the voxel currently being removed was one of only three voxels in a neighbourhood around it. The idea was: if we assume that up until this point the enemy was fully connected (i.e. there exists a curve between any two points on the enemy such that every point on this curve is contained in a voxel of the enemy), then removing a single voxel is only destructive if it was the only point between its two neighbours!

So now the problem became this: if a voxel that just got removed fits the above description (it being one of three voxels in a neighbourhood), how do I know if it created a disjoint set? How do I know that there doesn't exist another path between the two voxels neighbouring this guy that I just orphaned?

Does there exist another path between these two voxels?

Sounds like a path finding problem, sounds like I'm writing A*.

So I did this.

Attached is a video of it in action. After determining that removing a voxel creates two disjoint sets, I flood fill the two sets using the orphaned points as seeds and paint them red or blue. Since my player character deletes multiple voxels at a time, when a disjoint set happens to be small, it may be deleted immediately by the character's missile. Hence the case where the enemy is painted fully blue or red.
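The flood fill itself is a plain 6-connected BFS from an orphan seed over whatever voxels are still there; roughly this (a sketch, with a made-up present() lookup standing in for the enemy's volume data):

#include <array>
#include <functional>
#include <queue>
#include <set>
#include <vector>

using Voxel = std::array<int, 3>;  // x, y, z

std::vector<Voxel> floodFill(const Voxel& seed,
                             const std::function<bool(const Voxel&)>& present) {
  static const int dirs[6][3] = {
    {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}
  };
  std::vector<Voxel> out;                 // every voxel in this disjoint set
  std::queue<Voxel> frontier;
  std::set<Voxel> seen;
  frontier.push(seed);
  seen.insert(seed);
  while (!frontier.empty()) {
    Voxel v = frontier.front();
    frontier.pop();
    out.push_back(v);                     // paint this one red (or blue)
    for (const auto& d : dirs) {
      Voxel n = { v[0] + d[0], v[1] + d[1], v[2] + d[2] };
      if (present(n) && seen.insert(n).second)   // diagonals don't count as connected
        frontier.push(n);
    }
  }
  return out;
}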

Seems to work right? WRONG!

You may have guessed this, but the assumption that only two orphaned neighbours are possible is based on the assumption that removing a voxel can create at most two disjoint sets. This is incorrect. At most 6 orphaned neighbours are possible, creating 6 disjoint sets, IF the removed voxel was say the center of some star-like enemy. Imagine a 1x1 path shooting out of every face of this to-be-removed voxel. The set was originally connected, but this one little piece was holding it all together. A* on just two voxels is not enough ...

I'll post an update when I sort this out.

In other news I have lua bindings now, wooot :p. I'll do an update on that too!


June 28 2015:

It begins...

So I've started working on my third attempt to make a game out of my voxel engine. I believe in this one a lot, so I'm going to start documenting some of its development.

I'm making a voxel space shooter crossed with megaman. Think megaman bosses, but in space and with destructible bodies.

Destructible environments are pretty straightforward in a voxel engine; the volume data for an environment is static, so intersections are pretty trivial. Chunks are generally axis aligned too, meaning no funky transformations are needed to check point collisions. My engine currently organizes fixed sized chunks in an octree, so when I want to shoot one of my asteroids I query the octree, picking up chunks along the ray as I pass em and tracing through them to look for collisions.

But what happens when these chunks are rotating around and moving? For my voxelized enemies this will be pretty common, so my first task was to sort out intersections with arbitrarily positioned/aligned chunks. My current solution, like most raytracing renderers (mine is not one, fyi), is to transform any ray into the chunk's object space. This can be done by multiplying the incoming ray by the inverse of that chunk's model matrix (its translations, rotations etc.) and doing the intersection check there as usual. It's a simple trick and solves my problem.
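In GLM terms it's only a couple of lines; a sketch (my actual ray/chunk types differ): positions get the full inverse transform, while directions drop the translation by using w = 0.

#include <glm/glm.hpp>

struct Ray { glm::vec3 origin, dir; };

Ray toObjectSpace(const Ray& worldRay, const glm::mat4& model) {
  glm::mat4 inv = glm::inverse(model);
  Ray local;
  local.origin = glm::vec3(inv * glm::vec4(worldRay.origin, 1.0f));               // point: w = 1
  local.dir    = glm::normalize(glm::vec3(inv * glm::vec4(worldRay.dir, 0.0f)));  // direction: w = 0
  return local;
}
// After this, the intersection test is the same axis-aligned check used for static chunks.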

I use this trick as well to determine which faces of the enemy to render. I transform the camera's position into the enemy chunk's object space, which allows me to simply check whether the camera is in front of or behind a voxel face using the extents of the chunk. This would not have been possible in world space, where a face could have been unaligned with the axes.


I've uploaded a demo video to show how it works.


Hopefully I'll have more to show you in the future.