EEVEE has been evolving constantly since its introduction in Blender 2.80. The goal has been to make it viable both for asset creation and final rendering, and to support a wide range of workflows. However, thanks to the latest hardware innovations, many new techniques have become viable, and EEVEE can take advantage of them.
A new beginning
For the Blender 3.x series EEVEE’s core architecture will be overhauled to make a solid base for the many new features to come. The following are the main motivations for this restructuring.
Render Passes and Real-Time Compositor
A core motivation for the planned changes to EEVEE’s architecture is the possibility to output all render passes efficiently. The architecture is centered around a decoupling of lighting passes, but will still be fully optimized for more general use cases. Efficient output of render passes is a must in order to use the planned real-time viewport compositor at its full potential. This new paradigm will also speed up the rendering process when many passes are enabled, and all render passes will be supported, including AOVs.
Screen Space Global Illumination
The second motivation for the rewrite was the Screen Space Global Illumination (SSGI) prototype by Jon Henry Marvin Faltis (also known as 0451). Although SSGI is not necessarily a good fit for the wide range of EEVEE’s supported workflows, strong community interest in the prototype shows that there is a demand for a more straightforward global illumination workflow. With SSGI, bounce lighting can be previewed without baking and is applied to all BSDF nodes. This means faster iteration time and support for light bounces from dynamic objects.
"Screen Space Global Illumination add-on for Blender's EEVEE. A build where SSGI is natively integrated into the render settings has come out. I hadn't played with it in a while, but with SSGI things look really good in real time, and it's fun." pic.twitter.com/7jrbq6LXD7
A demo of the SSGI prototype branch by Hirokazu Yokohara.
Hardware Ray-Tracing
Supporting SSGI brings ray-tracing support a step closer. The new architecture will make the addition of hardware ray-tracing much easier in the future. Hardware ray-tracing will fix most limitations of screen space techniques, improve refraction, shadows … the list goes on. However, implementing all of these will not be as easy as it sounds. EEVEE will use ray-tracing to fix some big limitations, but there are no plans to make EEVEE a full-blown ray-tracer. For instance, ray-tracing won’t be used for subsurface scattering (not in the traditional way at least).
Note that hardware ray-tracing is not a target for the 3.0 release but one of the motivations for the overhaul.
Volumetric Improvements
Volume rendering is also something that should be improved with this update. In the current version of EEVEE released in 2.93, object volume materials are applied to the object’s entire bounding box and are still constrained to a rather coarse view-aligned volume grid.
The goal is to improve that by providing a volume rendering method that evaluates volumes in object space. This means volumes will be evaluated individually and per pixel allowing for high fidelity volumetric effects — without paying the cost of the unified volumetric approach (the current method used by EEVEE). This would also make volume shadow casting possible for those objects.
These changes would make the following use cases trivial to render, compared to the current implementation where this quality would be impossible to achieve (examples rendered using Cycles).
Water surface with absorption volume, rendered with Cycles.
Highly detailed smoke simulation with shadows projected onto the scene, rendered with Cycles.
New shader execution model
The current shader execution model is straightforward: it executes all BSDF nodes in a material node graph. However, this can become very costly for complex materials containing many BSDF nodes, leading to very long shader compilation times.
The approach in the rewrite will sample one BSDF at random, only paying the cost of a single evaluation. A nice consequence of this is that all BSDF nodes will be treated equally with all effects applied (Subsurface Scattering, Screen Space Reflections, etc…).
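As a rough illustration of the idea, here is a conceptual sketch of stochastic closure selection. This is not EEVEE's actual shader code (and is written in Python rather than GLSL for readability): one closure is picked with probability proportional to its weight, and its contribution is divided by that probability so the estimate stays correct on average.

```python
import random

def evaluate_material(closures):
    """Stochastically evaluate a material built from several weighted BSDFs.

    `closures` is a list of (weight, evaluate_fn) pairs, where evaluate_fn()
    returns the closure's unweighted response. The exact material value would
    be sum(weight * evaluate_fn()) over all closures; this estimator only
    evaluates one closure per sample. All names here are illustrative.
    """
    total_weight = sum(weight for weight, _ in closures)
    if total_weight == 0.0:
        return 0.0

    # Pick one closure with probability proportional to its weight.
    pick = random.uniform(0.0, total_weight)
    accumulated = 0.0
    for weight, evaluate_fn in closures:
        accumulated += weight
        if weight > 0.0 and pick <= accumulated:
            probability = weight / total_weight
            # Dividing by the selection probability keeps the estimator
            # unbiased: averaged over many samples, it converges to the
            # full weighted sum of all closures.
            return weight * evaluate_fn() / probability
    return 0.0
```

Averaged over many samples, as a renderer naturally does across pixels and samples, this reconstructs the full material while only ever paying for one BSDF evaluation per sample.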
And many more …
There is a huge list of features that are just waiting for this cleaner code base. That includes Grease Pencil support, vertex displacement, panoramic camera support, faster shader compilation, light linking, and more.
This is also an opportunity to rewrite the whole engine in C++. Hopefully this will help external contributions and encourage experimentation.
Conclusion
This endeavor has already started in the eevee-rewrite branch. Some features listed in this post are actually already implemented; however, at this point it is too early for testing. Soon there will be experimental builds for developers to test. At that time the development team will create threads on devtalk.blender.org to follow feature-specific development.
The target plan for Blender 3.0 is to have all features back, with the addition of some new ones. The final set of feature targets for 3.0 is not yet decided.
The Asset Browser will be a big milestone for Blender. A lot of effort goes into landing its first iteration for Blender 3.0. Here’s a demo, some design background and a short introduction to the recently held asset workshop.
There already are two posts covering aspects of the project:
Asset Manager: An early, high-level introduction to some core ideas of the design.
Pose Library v2.0: The new pose library design built on top of the Asset Browser.
Workflow Demo
Let’s start with a demo! The sections after that give a bit more background on the design. Keep in mind that what you’ll see still needs lots of polishing. Many design improvements and features are yet to come.
The workflow can be separated into four parts: Creating, editing, using and sharing asset libraries. It’s not the “correct” order, but we’ll start with the best part: Using.
Using
Using asset libraries should really be an enjoyable, effortless experience. Features that help place the asset in the scene are important. Here’s a work in progress implementation of object placement with bounding-box based snapping:
Using assets from an external asset library.
There are plans to extend this so you can keep transforming objects (or collections!) after dragging them in. In fact, this could become a general transform tool in Blender; it’s useful for more than just asset dragging.
Other kinds of assets, like poses or brushes, you wouldn’t drag into the scene but apply with a double click. Plus, there are plans for “generative” assets, which allow you to paint geometry or particles into the scene. But that’s just a little teaser.
Creating
Okay, one step back. Before using an asset library, you have to create it. A Blender data-block can become part of an asset library by using the Mark Asset operator, which is available in multiple places:
When using Mark Asset, an automatic preview is generated. Designs for more customizable previews are being discussed too.
The advantage of using the Outliner is that it allows turning multiple data-blocks into assets, as shown in the video.
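For scripted workflows, the same thing can be done from Python. This is a minimal sketch assuming the Blender 3.0 asset API, where `asset_mark()`, `asset_generate_preview()` and `asset_data` are the data-block level counterparts of the Mark Asset operator:

```python
import bpy

# Turn every selected object into an asset, similar to what the
# Outliner's Mark Asset operator does for multiple data-blocks at once.
for obj in bpy.context.selected_objects:
    obj.asset_mark()               # attach asset metadata to the data-block
    obj.asset_generate_preview()   # request the automatic preview render

    # Asset metadata lives on the data-block and can be filled in directly.
    obj.asset_data.description = "Marked from a script"
    obj.asset_data.tags.new("props")

# The file has to be saved for the assets to become visible to
# asset libraries that contain this .blend.
bpy.ops.wm.save_mainfile()
```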
Editing
Assets have metadata, which can be edited from the sidebar:
Note that the Asset Browser in the video shows the Current File asset library. This library is special, as it is the only asset library in which assets can be edited. So, to edit the metadata of an asset, or in fact the asset itself, you have to open the .blend file it is stored in. As usual, the file should be saved in order to make the changes available to other Blender instances.
Having to find the right blend file isn’t always that convenient, so there is a new operator that automates that. Right-click on an asset (or select & press W if using right-click select), and choose “Open Blend File”. It starts a new Blender instance, opens the .blend file containing the asset, waits for you to do the edits, and after you close it (don’t forget to save!) the Asset Browser is refreshed automatically.
Edit assets from an external asset library, using the “Open Blend File” operator.
Sharing
Once you have created some assets you may want to put them into an asset library, which can be accessed from any project. Such a custom asset library can simply be “mounted” in the Preferences. Just provide a name and a path on your disk, and Blender will make any assets it finds there available in the Asset Browser.
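As a sketch of what that mounting step amounts to, assuming the Blender 3.0 preferences API (the asset library list under File Paths); normally you would just do this in the Preferences UI:

```python
import bpy

prefs = bpy.context.preferences

# Add a new, empty asset library entry and point it at a directory.
bpy.ops.preferences.asset_library_add()
library = prefs.filepaths.asset_libraries[-1]
library.name = "My Props"
library.path = "/path/to/my-asset-library"  # any folder containing asset .blend files

# Save the Preferences so the library stays mounted in future sessions.
bpy.ops.wm.save_userpref()
```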
Other kinds of asset libraries are planned, like project asset libraries and dynamically created ones from online services. The following sections give more insight on that.
Some Background
What Are We Building?
The project is about building an Asset Browser, not an Asset Manager. Asset management is a huge topic, and even though they’re both considered some form of “asset management”, the functionality needed for an online asset market differs quite a bit from the functionality needed for a production pipeline. It’s not exactly breaking news that production pipelines are a complex topic too. While there is some standard technology nowadays (like Pixar’s USD), there are still big differences from studio to studio. They are just too different in what kind of work they do, how they do it and in turn, what trade-offs in technology they are willing to accept over others.
Blender should help studios get a better pipeline; it should not limit or dictate what to do.
Because of this, the project doesn’t focus on asset management tools, but on the following: A streamlined, extensible UI for creating, editing, using and sharing asset libraries. This is the Asset Browser. Below is a section that describes how pipelines and online services can make good use of this as well.
The project also covers three things besides the Asset Browser itself:
Asset View: A mini-version of the Asset Browser that can be displayed anywhere in the UI, e.g. in sidebars, popups and as part of data-block selectors:
Poses available in the 3D Viewport side-panel
Mock-up of a material selector that includes assets from a library
Asset System: An API for developers to register, access and display assets for Asset Browsers and Asset Views to use. Accessible via the Blender Python API for add-ons, as well as a C++ interface for Blender itself to use internally.
Asset Bundle: An asset library that will be either bundled with Blender or available as a separate download on blender.org. The aim is to help artists get started with Blender quickly; it’s explained further below.
Lastly, we want to build something we can be proud of! All efforts will mean nothing if the outcome is not something artists enjoy using. Quality standards are high; it will be worth the developers’ struggle.
Current Scope
The current implementation is focused on a specific use-case: Local user asset libraries. That is, asset libraries for Blender data-blocks that an artist stores on the hard drive, accessible via the Asset Browser. So for example, easy reuse of objects, base-meshes for sculpting, materials, poses, and more.
At this point, the artist is the asset manager. It is the artist’s responsibility to manage how and where assets are stored. Blender doesn’t try to be smart and do work for the artist. The core design needs to be reliable and the best way to do that is by keeping it simple. We can always add helpers, like operators that save the artist some work, but these are mere additions to a simple, reliable core design.
Probably the most obvious design decision coming from this is that Blender doesn’t support “pushing” assets into an external library. To add an asset to an asset library, you have to open a .blend file inside the library, create the asset in there and save the file inside the library. Blender does not copy the created asset to an external file when using Mark Asset (as shown in the Create section above). It keeps the asset in the current file.
The same applies to editing an existing asset: Open the .blend file containing the asset, do the edits there and save the file. Assets are just Blender data, so they can be edited in exactly the same way you’re used to. Again, we can still add helpers to automate this process. At this point we focus on the foundations, though.
Design Basics
The demo above showed how the Asset Browser is used in practice. Here is a recap of the design fundamentals seen there.
A couple of relevant definitions:
Current file: the currently open file, where work is happening.
Asset: Data-block with meaning.
Asset file: Any .blend file containing assets.
Asset library: An entry point to one or more asset files.
Data-block library: A pointer to a .blend file, dynamically linking content into the current file.
Blender sees assets as metadata-enhanced data-blocks. Most data-blocks in Blender (objects, lights, materials) can already be extended with asset metadata and become assets. The creation of asset metadata happens in the current file.
A currently open .blend file with regular data-blocks and assets.
In the example below, the Suzanne object in the suzanne.blend file has been turned into an asset.
Example file: suzanne.blend. The right hand side shows the same .blend in a more compact way, which the following graphics will use too.
Multiple asset files can be collected into an asset library, which can be “mounted” in Blender’s Preferences. This will make all assets contained in the library available in any Blender instance (assuming the changed Preferences are saved).
With any Blender instance having access to asset libraries, it is possible to append or link assets into the current file.
The Asset Browser uses the regular link and append functionality for Blender data-block assets.
Preset assets (such as poses and brushes) would be applied, not linked or appended.
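Since it is the regular machinery, dropping a data-block asset is roughly equivalent to the standard append/link operators. A small sketch (the path is made up):

```python
import bpy

# Append the "Suzanne" object from an asset .blend into the current file.
# Appending makes a local, editable copy; bpy.ops.wm.link() would keep it
# linked to the library file instead.
bpy.ops.wm.append(
    directory="/path/to/asset-library/suzanne.blend/Object/",
    filename="Suzanne",
)
```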
blender.org Asset Bundle
One important goal of the asset project is an official asset bundle. Such a thing can be a tremendous help for beginners and is a big step for Blender 101. Even for experienced artists it can be useful to have a bunch of assets ready for quick set mock-ups. An important principle is that the bundle shouldn’t only be there for the sake of being there – it should also be a way to learn. You should be able to pick apart assets, and look into how they are created. And that with assets created by some of the greatest Blender artists around.
The bundle is going to be more of a “getting started” kit, it’s not supposed to be a fully fledged asset library. For that there are still plenty of 3rd party online services available.
This shouldn’t lead to drastic increases in the Blender download size. Therefore the bundle will likely become available as a separate download, or we may provide a way to browse the assets online (in accordance with our privacy and internet access principles).
Pipeline and 3rd Party Asset Service Integration
Typically, in a professional environment there is some kind of server-side software, which does some basic asset management tasks. This could be a production tracker like Attract or Kitsu; it can also be an online asset service like the Blender Cloud asset library, or BlenderKit. It is connected to some kind of asset database.
Blender knows nothing about the server software, or the database. How are assets stored there? What kind of database is it (SQL? SVN? Dropbox? …). However, the asset service can provide an Add-on as a bridge from Blender to its servers:
Blender add-on for the Asset Service. *
In fact, Blender doesn’t even need to know details about the database. Basically it just needs a list of asset names, maybe preview images, and some functions (callbacks) to be executed whenever a specific asset is dropped into editors or other actions are executed (like deleting an asset). So an add-on can connect Blender to some asset service by providing some metadata and operations, with which Blender can build an interactive view into the database:
Blender creates a view into the asset database through the metadata and operations provided by the add-on. *
The asset system needs to support some kind of “virtual” asset libraries (asset libraries not stored on the hard drive) and asynchronous loading of assets into Blender. Assets can already store custom properties in their metadata. That way add-ons can register their own metadata and (optionally) display that in the sidebar:
Asset Browser sidebar showing the asset metadata. Add-ons can add their own metadata fields with custom properties.
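A minimal sketch of that, using custom properties on the asset metadata (the service and its property names are made up):

```python
import bpy

obj = bpy.data.objects["Suzanne"]
obj.asset_mark()

# Custom properties stored in the asset metadata. A bridge add-on could
# read these back to map the data-block to an entry in its external
# asset database, and draw them in the Asset Browser sidebar.
obj.asset_data["my_service_asset_id"] = "prop-01234"
obj.asset_data["my_service_revision"] = 7
```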
Blender should provide some kind of reference add-on that does this and can be used as a base by other studios and services. This may also just be Blender Studio add-ons for browsing Blender Cloud assets and for pipeline management.
* Graphics include Font Awesome icons: Font Awesome Free 5.15.3 by @fontawesome – https://fontawesome.com License – (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
The Asset Workshop
It is time to take the project to the next phase. That means:
Evaluate and improve the design, for the 3.0 release and beyond, with all people involved and a number of stakeholders.
Document designs, sign off technical specifications.
Define targets for the 3.0 release.
Implement remaining 3.0 targets, get existing implementation ready for a release.
This new phase started with the four-day Asset Browser workshop, which took place at the end of May. It was done remotely, allowing more stakeholders to join and to drop in and out whenever needed.
Attendees (on and off) were:
Andrew Peel
Bastien Montagne
Brecht Van Lommel
Francesco Siddi
Julian Eisel
Julien Kaspar
Pablo Dobarro
Pablo Vazquez
Sybren Stüvel
Ton Roosendaal
Vilem Duha
William Reynish
A number of documents were created and notes were taken throughout the workshop. The outcomes will be presented in a separate blog post soon.
This should be enough to give a good idea about the Asset Browser project as it stands. The upcoming workshop post will expand on this to give an idea of where we want to go from here. It’s just too complex to cover the entire project in a single post.
A lot of effort is going into the project, especially to get a good first version into Blender 3.0. Many foundations seem to be there now, and we are confident in them; from here on it’s a matter of polishing and expanding.
Pablo Dobarro shares an outline of the design for a new modern asset creation pipeline to be developed during the following years.
During the past months I’ve been working on the design of what I call the “Asset Creation Pipeline”. This project will tackle all Blender functionality related to how you create characters, props or environments with Blender. Here we will refer to an asset as any object that is going to be rendered in a final scene, not the data-block definition used by the Asset Browser project. The goal is to have a design and a technical implementation plan on how to tackle long-standing limitations of Blender like painting or retopology, making a modern design ready for the years to come.
This post outlines the more general ideas of the design, without going into any details on how the implementation or the final product will look. More detailed documents about the whole design and implementation are being worked on and they will be published soon.
Blender is software that has always been designed for interactivity, and most of its technical innovations were made in this field. This means features like Cycles rendering directly in the viewport with all of the scene contents as they are being edited, EEVEE, modifiers, geometry nodes and the redo panel. We can also mention planned projects that have not happened yet, such as the full interactive mode or the real-time viewport compositor.
There is another way of approaching software design, which is prioritizing the handling of arbitrarily large amounts of data. The main selling point of such software is that it can actually render and edit the data, leaving interactivity as a “nice to have” feature when it is technically possible to implement. So, when you are designing for handling any data size, you can’t assume that the tools will just scale in performance to handle the data. Software designed like this will try to make sure that no matter how many textures, video files or polygons you want to edit, you will always be able to do it in some (usually not interactive) way.
The core concept of the new asset creation pipeline design is embracing Blender’s interactive design instead of trying to fit a different workflow inside it.
So, development and new technical innovations regarding the asset creation workflow will focus on having the most advanced real time interaction possible instead of handling large amounts of data. This means that performance will still improve (new features will need performance in order to keep the real time interaction working), but the features and code design won’t be targeting handling the highest possible poly count or the largest possible UDIM data set. The focus on performance won’t be on how high the vertex count in Sculpt Mode can be, but how fast Blender can deform a mesh, evaluate a geometry node network on top of it and render it with PBR shading and lighting.
Clay brushes working with EEVEE enabled (performance prototype). Supporting sculpting tools while using a fully featured render engine is one of Blender’s strengths. Improving that workflow is one of the goals of this project.
Focusing on this new design will allow Blender to have the best possible version of features that will properly take advantage of Blender’s strengths as an entire software package. This means things like:
The most advanced deformation tools to control the shapes of base meshes, which can be used in combination with procedural shaders and geometry node networks for further non-destructive detailing of the assets. Current tools like the Pose, Boundary and Cloth brushes are poorly implemented in master due to handling the legacy sculpt data types. Addressing these limitations will make them work as they should.
The best possible version of Keymesh in order to combine fully rigged and stop motion animation in the same scene.
A fully customizable painting brush engine, allowing the support for procedural painting brushes for textures, concept art and illustration.
The ability to use advanced painting tools to control how procedural PBR materials or procedural geometry effects are applied to the surface of an asset, manipulating surface information stored in mesh elements and textures that can control both masks or general attributes.
Multi data type tools: the same brush will be able to deform meshes, volume level sets, curves, Grease Pencil strokes or displacement vectors stored in a texture, without the need for baking them from a mesh.
Blender allows tweaking the shape, details and surface of objects using different systems that interact with each other, providing real-time feedback and non-destructive editing.
Handling High-poly
We also know that handling large amounts of data is important for some studio pipelines. For those tasks, the plan is to handle the data from a separate editor that does not interfere with the rest of Blender’s interactive workflow. When it comes to meshes, this can come as a separate editor with its own viewport optimized only for rendering as many polygons as possible and for non-real-time mesh processing operations. This will keep the high-poly mesh data isolated from the rest of the scene, making the tools, real-time viewports and the rest of the features perform as they should. Having this kind of data in its own container will also help with features like streaming directly to render engines without affecting the performance of the scene.
Modes
In order to fit all planned features of the new design, some bigger changes have to be made to Blender in order to properly organize the new workflow. Among other changes, this means that the modes and their roles have to be redefined. Modes will contain all features that have a common purpose, regardless of the target data type or workflow stage. Workspaces will be responsible for configuring the modes so they can be used for a particular task. This will allow handling a much higher level of tool complexity and flexibility when defining custom workflows.
These are the proposed modes for all object types, describing their functional purpose in the pipeline. Note that the naming of the modes is not final, but their design and intended purpose on the workflow are:
Object: Manages objects in the scene and their properties.
Freeform: Controls the base shape of organic objects.
CAD: Controls the base shape of hard surface and mechanical objects.
Paint: Controls the base surface information of objects.
Layout/Topology: Prepares the data for procedural tools and animation.
Attribute Edit: Controls the source data for the procedural systems.
Edit: Does low level data layout editing when needed, allowing direct manipulation of the individual elements of the data type.
Other modes related to other parts of the pipeline like weight painting, Grease Pencil draw and Pose are not directly related to the asset creation pipeline, so they won’t be affected by this project.
Not only can workspaces, tool presets and editors be reorganized for various tasks, but the user has control over this customization for their own tasks. As an example, let’s define workspaces for different use cases of painting, all based on the Paint Mode:
A hand painting workspace uses the Paint Mode. It contains a viewport with a white Workbench studio light. The UI shows blend brush presets and color gradient presets, with open UI panels for color palettes and color wheels.
A concept art workspace uses the Paint Mode. It is similar to the hand painting workspace but it contains a 2D viewport and 2D selection tool action presets.
A PBR texture workspace uses the Paint Mode. It contains a viewport with EEVEE enabled in material preview mode. The UI also shows a texture node editor and an asset browser with material presets.
Roadmap
Despite the amount of changes this design introduces, most of the development required to achieve the proposed product has already happened (some of it is in master with a different UI, other parts are in separate branches or disabled as experimental). The first step would be to gather and merge all that development into an MVP version. This initial version will have mostly the same features as the current Blender master branch (no functionality or performance will be lost), just organized in a different way. Hopefully, this new organization and naming will make it clearer how the workflow and tools were intentionally designed, so they can be used to their full potential. For example, after the reorganization, the same sculpting functionality will still be available as a subset of features of the Freeform Mode, which now has a much broader scope.
After that initial step, more technical development can happen. This includes things like redesigning the texture projection painting code, refactoring the tool system for tool preset management or building better overlays and snapping for retopology. With this design clear, those tasks can now happen faster as they fit in a well defined big picture design.
It is also important to note that this design includes some tasks that require technical research and innovation, like painting displacement surface details. These tasks have a high risk of taking much longer to develop, but they are not crucial for having a functional version of the asset creation pipeline.
This project also depends on other ongoing development like the asset browser and storage or the upcoming UI workshop. More detailed designs about the final features that involve the asset creation pipeline will be discussed and worked on with those projects.
Developers Julian Eisel and Sybren Stüvel present outcomes of the recently held asset workshop, which started a new phase for the Asset Browser project.
The Asset Browser is one of the main targets for the Blender 3.0 release – and it’s a complex one that has been discussed in various forms over the past decade. That’s reason enough to give it extra attention in the form of a dedicated workshop, which took place 27 May – 1 June. The aim was to work out the future of the asset system for the upcoming Blender 3.0 release and beyond.
A number of documents were created for the workshop, including daily write-ups which give more detailed insights into the discussions.
This is going to be a longer post full of information – the workshop was actually productive! If you are interested in the short-term outcomes, check out the list of Blender 3.0 targets at the end of this post.
Status
Blender 3.0 alpha builds already include an experimental version of the Asset Browser. While there is plenty of polishing and fixing to do still, the core functionality is there. To further explore and evaluate the design, the Pose Libraries v2.0 project reworked pose libraries based on the Asset Browser. This is actually being battle tested in a separate branch, in the Blender Studio’s Sprite Fright production.
So there is already a rather basic Asset Browser and a mostly feature complete pose library system. How do we move forward from here? Can we get things done for Blender 3.0? What will that look like? And where do we want to go after that? It’s time to take the Asset Browser project to the next phase.
The asset workshop marks the start of this new phase.
Workshop Goals
At the beginning of the workshop, the workshop goals were defined as:
Get on the same page in regards to the product vision and scope.
Evaluate and improve the design with a number of stakeholders. Get their approval.
Define targets for the 3.0 release.
Discuss technical specifications.
(The term product is used in this post to describe the overall outcome we work towards, which may be done through multiple projects.)
Product Scope
Before digging into designs, it was important to get everybody on the same page in regards to what we are building. We started by brainstorming about the outcomes we expect, or we think others may expect. Out of those expectations we wrote down a number of user stories that we could individually discuss and decide if they should (eventually) be covered by the product or not.
For example, this story should be covered:
As an artist, I can access textures/models/materials from an online store in the asset browser.
On the other hand, this story will intentionally not be covered:
As a production lead, I can approve submitted assets for production-wide use.
Reason: This is a pipeline specific asset management feature that should be handled by the studio’s custom tools. That said, Blender’s asset system will add the framework to get such tools to work (e.g. a studio could use Blender’s asset tags or the custom properties of the asset metadata for this).
The entire list of user stories is a great way to see what to expect from what we are building. It’s highly recommended to check it out, since it’s not discussed at length here.
Asset Navigation & Organization
One of the main topics of the workshop was how asset libraries can be organized and navigated in the Asset Browser. Currently it categorizes assets by data-block type (object, material, image, etc.) only, which has a number of issues:
It doesn’t scale well. Imagine navigating libraries with 100s of objects.
It doesn’t allow good grouping of assets. E.g. scene light collections cannot be displayed together with HDRIs.
Data-block types aren’t granular enough. E.g. node groups can be of multiple sub-types; currently geometry node groups show up under the “Shading” category. Similarly, textures may be created for use as stencils, not just as regular textures.
Catalogs
Instead we decided there needs to be a customizable hierarchy. The directories on your hard drive are one such hierarchy. However, we made a distinction: the way you want to browse assets may not always be how you store assets on disk. We designed a “virtual” hierarchy to manage assets, independent of disk storage, such that assets can be grouped together into catalogs.
This illustrates why you may want to browse assets differently than storing them. The poses from ellie_poses.blend can be split into the catalogs Characters/Ellie/Poses/Head and Characters/Ellie/Poses/Hands. The other way around is useful too: Trees from big_trees.blend and small_trees.blend can be merged into a single Props/Trees catalog.
An asset can be in only one such catalog. One important difference from directories is seen when you select a parent catalog. In a directory on disk you would only see the files in that particular directory. Selecting a catalog with sub-catalogs makes all assets from the entire sub-tree visible. As a concrete example, when you select the Poses catalog above it would show both the head and the hand poses.
Rough mock-up of the Asset Browser, showing both Head and Hand poses when selecting the Poses catalog.
Read More: Managing the Catalogs
Technically the catalogs are fairly simple:
A catalog is just a path (like Characters/Ellie/Poses) with a symbolic ID (like "poses-ellie"). They are stored in a simple text file on disk, in the asset library.
Every asset stores the symbolic ID of its catalog.
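A small sketch of that mechanism; the file format shown here is made up for illustration, since the real on-disk format had not been decided at the time of writing:

```python
# Hypothetical catalog definition file, stored once per asset library,
# mapping a symbolic ID to a catalog path:
CATALOG_FILE = """
poses-ellie-head: Characters/Ellie/Poses/Head
poses-ellie-hand: Characters/Ellie/Poses/Hands
props-trees: Props/Trees
"""

def load_catalogs(text):
    """Parse 'symbolic-id: Catalog/Path' lines into a dictionary."""
    catalogs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        catalog_id, _, path = line.partition(":")
        catalogs[catalog_id.strip()] = path.strip()
    return catalogs

# An asset only stores the symbolic ID, so renaming or moving a catalog
# means editing one line in the definition file; no .blend files change.
catalogs = load_catalogs(CATALOG_FILE)
asset_catalog_id = "poses-ellie-head"   # as stored in the asset's metadata
print(catalogs.get(asset_catalog_id, "Unassigned"))  # Characters/Ellie/Poses/Head
```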
Renaming a catalog is thus as simple as changing its path in that text file (which of course Blender will make simple to do in its UI). All assets that refer to that catalog are automatically shown at the new location. This doesn’t even require any changes to any blend files! This has a great advantage: moving assets around in the catalogs does not break library linking. Furthermore, you can split up the assets of a single blend file into various catalogs, without having to worry about what happens with the data in the blend file — it stays where it is.
Moving only a few assets from one catalog to another, while leaving others behind, will require changes to the blend files. For this, open the Blend file that contains them, and change which catalogs the assets belong to. As a rule, Blender only works with the file it has open at that moment. There are a number of reasons for this, for example it keeps things like Undo working properly, and you don’t have to be afraid of making mistakes. We realize that this approach has its downsides, and that it can become cumbersome when there is a small number of assets per blend file. After all, you may even have only a single asset per blend file.
Updating a different blend file, one that is not currently open in Blender, is not something Blender itself is allowed to do. However, add-ons could do this. The advantage of storing only the symbolic ID of a catalog is that moving such an asset only requires a change to that particular property. Similar edits to blend files are made by the Blender Asset Tracer, which can change file paths when files have been moved around; this shows that it is firmly in the realm of possible tech. For a larger scope, potentially useful outside asset management, a completely new application could be made, tailored for TDs managing relations between blend files in a production.
The downside of storing the catalog definitions in the asset library, as opposed to its Blend files, is that the catalog information may get lost when sharing a single Blend from the library. This could be addressed in various ways. E.g. assets could store the last catalog they were seen in as a hint. This may break easily but should be mostly fine in practice. Alternatively, assets not assigned to a catalog could just go to a default catalog based on the asset type (e.g. stencil textures could go to Textures/Stencils). So even though the exact catalog setup is lost, things stay in a useful state. Either way, there should probably be some way to export a list of assets (possibly a subset of an existing asset library) as new asset library. This would also include the relevant catalog definitions then.
Dynamic Catalogs
It is very useful to filter assets by tags, names, or other characteristics. Having to set them up every time can be annoying. Instead, there should be a way to store filter settings as presets and have fast access to them. We named these filter presets Dynamic Catalogs. These can also be hierarchical, but are merely filter setting based. As a result, an asset can be visible in multiple dynamic catalogs.
Normal (non-dynamic) catalogs can only be stored inside the asset library. They are set up as part of creating an asset library (e.g. by a TD in a studio). Dynamic catalogs can be defined inside the asset library itself as well, but they can also be created by the user of an asset library and stored in their Preferences.
Asset Bundle
The asset bundle already got a little introduction in the Asset Browser Project Update post. During the workshop the topic was thought through some more.
Three things should be clearly separated:
Demo Files: blender.org already provides a number of demo files for showcasing Blender functionality and some great art done with Blender. These have no direct relation to the asset bundles and can stay downloadable as they are now.
Standard Asset Library: A number of assets as rather atomic building blocks to get started with using Blender. These would typically be utilities or starting points, but wouldn’t be used as-is in final renders. For example: modifier node groups (replacing the old modifiers), default brushes, default workspaces, viewport studio lights, matcaps, base meshes, encoding presets, render presets, quick effects, basic utility node groups, …
blender.org Asset Bundle(s): More high level, less atomic assets that may go into the final render as-is. They would help quickly getting started to produce content. For example: Characters, furniture, metal materials, fur materials, lighting setups, HDRIs, rigs, grass particles, textures, … You should be able to dissect some of these assets, to learn something about how Blender works and how you can create such assets yourself.
The standard asset library would come with each Blender download. It would not be possible to modify (edit, move, delete) these assets, they are basically a part of Blender. The blender.org bundle on the other hand might become a separate download, to avoid bloating the regular download size of Blender.
None of these would replace 3rd party asset services/markets. The blender.org asset bundle wouldn’t be a fully fledged asset bundle, just a number of useful assets.
Asset bundles could also tie into application templates. For example: a “3D Printing” template that sets up Blender for simple 3D printing use, including add-ons, the workspaces, and a number of ready-to-print assets.
A “2D Animation Tutorial” template guiding you through the interface, including pre-rigged characters ready to animate (so you don’t have to learn anything about rigging just to get started).
The application templates system will have to be expanded to support mounting a bundled asset library.
Brush Management & The Asset Browser
The way brushes are managed in Blender is in dire need of a complete overhaul. The plan is to tackle this in a separate project, so it is not something to expect in the Blender 3.0 release. Nevertheless the Asset Browser is designed to work as a basis for better brush management, just like it acts as the basis for the new pose library system.
Currently, brushes are local to each Blend file. Sharing brushes across Blend files or projects means having to append them – not a convenient workflow, to put it mildly. Brushes should be shared across Blender files and projects, similar to the Preferences or Workspaces.
Brushes could simply be stored in an asset library. Maybe Blender should come with a default path for this library, and maybe custom brushes would get stored in this library by default. This would involve “pushing” assets into an external library though, which is a topic for further discussion.
With a good tool and tool preset design, you’d rarely have to modify parameters of a brush. That is because you have a whole number of well-designed presets at hand, probably a few hundred shipping with Blender already. Instead of switching brushes or tools, or changing settings, you select some kind of preset asset. Why would you even care what that preset changes exactly (tool, parameters, brush, material, stencil, …) as long as it predictably gives you the result you want? And you need to be able to organize and navigate these hundreds of brushes efficiently. This is once more where the catalog design really shines. In fact, brushes would live in a 3-layer design:
Asset Library: Often hundreds of brushes, for all kinds of purposes.
Catalogs: Organize the asset library into smaller chunks that make navigating simple and efficient. Dynamic (filter-setting based) catalogs can be defined by both the asset library creator and its user.
Brush Picker: A UI element (maybe multiple) providing a number of brush “slots”, which brushes can be assigned to for quick access via a popup or shortcut.
Again, this is a project on its own, so this section just scratches the surface.
Terminology
There are a couple of terms that we wanted to refine or reconsider.
Mark Asset, Clear Asset: Better terms were discussed (e.g. something like Add to Asset Library), but for a number of reasons none of them seemed appropriate. The decision was to just make a slight change: use Mark as Asset and Clear Asset for now. It would still be good to find something better.
Asset Library: A directory on disk, containing blend files with assets. This will be expanded to include non-blend file assets (like image files), and non-disk locations (like an online service).
Tool / Tool Presets / Brushes: The goal is to present artists with tools they can use directly, so the distinction between tools, tool presets, and brushes becomes fuzzy. For some tools, selecting the tool by itself makes sense (like the measure tool), whereas other tools behave wildly differently depending on their settings (like the cloth sculpting tool). In the latter case, each of these settings could be considered a “tool” by itself and be presented with its own icon on a tool picker. Or maybe it’s a tool preset then, or a brush? This needs further discussion outside of the asset project.
Asset Types
We differentiate between “primitive” and “preset” assets:
“Primitive” assets: Import action is Link/Append.
“Preset” assets: Import action is Apply (e.g. the data-block is temporarily loaded to apply its settings).
In the future, the asset type definition will be expanded. Non-Blender assets, such as image or audio files, should be supported. For such files, asset metadata is then stored in XMP sidecar files. Importers (USD, glTF, FBX, …) could add support for their file types as assets this way too. Furthermore, it should become possible to enrich an asset with a Python script, which can then provide code to be run when the asset is used.
It could be argued that there are also “generative” assets, e.g. one that allows painting grass onto surfaces, or to add windows and doors to walls (cutting out holes as needed). Technically these would probably be implemented as preset assets executing custom Python code.
Asset Identifiers/References
Internally, we need to be able to reference assets somehow. Multiple assets inside an asset library may have the same name, and they may move around and be renamed. Possibly the only proper way to get this to work perfectly would be some kind of Universally Unique Identifier (UUID). However, UUIDs are quite difficult to implement reliably; for example, duplicating a Blend file shouldn’t cause duplicated UUIDs for the assets inside. A totally reliable UUID system is tricky to get to work and so we decided against it.
Instead the identifiers could be constructed out of the following components:
An asset library identifier.
An asset identifier, which could be just the (asset library relative) file path + data-block name for local assets, or something custom for advanced setups through add-ons.
This could be expressed with Uniform Resource Identifiers (URIs), for example:
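Purely as a hypothetical illustration (neither the scheme nor the strings below are a decided format), such an identifier could be put together like this:

```python
# The two components described above, with made-up example values.
library_identifier = "user-library/my-props"           # which mounted asset library
asset_identifier = "trees/big_trees.blend/Object/Oak"  # relative file path + data-block name

# Expressed as a URI it might look something like this:
uri = f"asset://{library_identifier}/{asset_identifier}"
print(uri)  # asset://user-library/my-props/trees/big_trees.blend/Object/Oak
```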
It’s not clear yet if we actually need full URIs. For the most part just the asset identifier alone is all that’s needed. But this is roughly the direction we want to go in.
De-duplicating Assets
Dragging in the same material on multiple objects shouldn’t cause the same material to be added over and over again. That is an easy way to bloat your memory usage with duplicated high-resolution textures. It would be good if Blender could somehow recognize that a dragged in asset already exists as data-block in the file and avoid adding it again. It should just reuse the existing data-block.
Getting this to work isn’t that simple. It’s quite related to being able to identify assets reliably, as discussed above. But it doesn’t need to work in 100% of cases. We do have an idea on how it could work.
Read More: How it could work
Roughly, the idea is to do this:
A dragged in asset data-block stores a reference to the original asset.
When dragging in a new asset, we find whether it is already referenced by an existing data-block. If so, we use the RNA diffing created for overrides to compare if the data-blocks are equal (e.g. a material wasn’t changed since it was dragged in from the Asset Browser).
If equal, avoid the duplicate.
If not equal, the data-block doesn’t match its referenced asset anymore, so the reference can be removed.
This would still cause duplicates when:
The source .blend of the asset is modified, even if the asset itself wasn’t touched.
The source file is moved on the hard drive.
The asset library identifier changes somehow.
So it’s not perfect, but should be enough to avoid a lot of data bloat. We can still make it better if needed.
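A rough, self-contained sketch of that check; `asset_reference` and `rna_equal()` are made-up stand-ins for the stored asset reference and the RNA diffing mentioned above:

```python
from dataclasses import dataclass

@dataclass
class DataBlock:
    name: str
    properties: dict
    asset_reference: str = ""   # stand-in for the stored reference to the asset

def rna_equal(a, b):
    # Stand-in for the RNA diffing used by library overrides.
    return a.properties == b.properties

def reuse_or_append(asset_ref, asset_original, existing_datablocks):
    """Reuse an unchanged local copy of a dragged-in asset, else append a new one."""
    for datablock in existing_datablocks:
        if datablock.asset_reference != asset_ref:
            continue
        if rna_equal(datablock, asset_original):
            return datablock            # unchanged since it was dragged in: no duplicate
        datablock.asset_reference = ""  # local copy diverged, the reference is stale
    new_block = DataBlock(asset_original.name, dict(asset_original.properties), asset_ref)
    existing_datablocks.append(new_block)
    return new_block

# Dropping the same material twice only adds it once.
scene = []
wood = DataBlock("Wood", {"roughness": 0.6})
a = reuse_or_append("lib/materials.blend/Material/Wood", wood, scene)
b = reuse_or_append("lib/materials.blend/Material/Wood", wood, scene)
print(a is b, len(scene))  # True 1
```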
Online Service Migration
Online asset services should be able to migrate to the Asset Browser using Blender’s asset system soon. Or at least they should be able to get a good idea of how they could do it. Check how we generally approach pipeline and asset service integration in the previous Asset Browser Status Update post.
The specifics of the Python API weren’t discussed during the workshop, but the sections above about organization & navigation, asset types and asset identifiers/references will be important for the API as well. Roughly, asset services should be able to do the following:
Register their own asset libraries (not necessarily stored on hard drive).
Define or extend asset catalogs.
Fill or extend (custom) asset libraries with their own assets – that means providing basic data (asset names, type information, previews, metadata) and callbacks for Blender to create an interactive “view” into an asset database.
Create custom asset types (e.g. a “door” asset that adds the door geometry and cuts a hole into the wall it’s dragged onto).
Do custom drawing while dragging in assets.
Register custom handlers (e.g. callbacks for when a specific asset type gets saved).
Register their own asset metadata using custom properties (e.g. a bounding box and an up-vector to visualize an object dragged into the 3D View).
Extend the Asset Browser UI with own operators, icons and asset metadata fields.
Register custom asset drop operators for editors.
Perform asynchronous loading of the actual asset data-block.
Maybe: Asset “proxy” loading, so a simplified version of an asset can be displayed while the high-res one is downloaded.
Open Topics
There is of course a lot more to discuss still. Here are some of the more noteworthy topics.
Asset Pushing
The approach that Blender supports natively involves placing the blend file with the asset in an asset library directory, and marking the asset as such. Asset pushing is a different way of getting assets into the asset library, where you’re working on some file and want to copy the asset from that file into the library. This is a concept that appears deceptively simple. In certain cases it is actually simple, but often enough it gets quite complex. Say you want to push an object into an external asset library. Should that also copy the materials? What about the texture images referenced by those materials? What about objects referenced by custom properties? And in which files would they have to go? Do they all go into one big assets.blend, individual Blend files, or into a directory per asset type? Blender should not be making such decisions for you. Apart from all those questions about what should be copied where, there is another limitation. As described above in the Managing Catalogs section, Blender itself is not allowed to write to blend files other than the one you’re currently working on.
For specific cases, these things are all solvable. For example, perhaps the pose library could get some configuration option that says which character’s poses should go into which blend file. Or maybe it gets some hook that other add-ons can hook into to configure this, such that every studio pipeline can manage this to their heart’s content. Furthermore, add-ons can write to other blend files, so they could make the decisions for users. Plenty of possibilities, as you can see.
There are definitely cases where artists don’t care much about how their assets are stored, how dependencies are handled etc. For example when managing brushes: Users just want their brushes to be stored in a way they can access them from anywhere, it’s not the kind of thing you want to spend a lot of effort on managing. Even for general user asset libraries (as opposed to project asset libraries) this may be the case for many users, although here things get more controversial. So this topic is quite important to discuss still, if not to say: the elephant in the room.
Projects
To support bigger setups, like the one used in the Sprite Fright production, there are ongoing discussions about making Blender aware of projects. This is elaborated on in T73366 Asset Manager Basics. In short, a Blender project is defined by a special configuration file in the project’s top-level directory. This file can then define a project asset library, which can then be used by everybody on the project when working on a file in the project directory. Not only should this aid in collaboration, it should also make it easy to switch between projects and always have the right assets available to you.
Tools, Tool Presets, Brushes
As written above in the Terminology section, Tool / Tool Presets / Brushes are still being discussed. It’s mostly a matter of deciding where exactly the line between “a tool” and “its settings” is drawn, how this is presented in the user interface, and how this relates to the use of the Asset Browser vs. dedicated tool pickers.
Technical Specifications & Python API
During the workshop a lot was talked about technical solutions, as well as how Python could extend the asset system. Things are still quite early though, so there is a lot more we could discuss. Some basics were explained in the previous Asset Browser Project Update post and the Online Service Migration section above. More info can be found in the main design task (T87235 Asset System Technical Design) and its sub-tasks.
Further Use Cases
The Asset Browser will have a huge impact on much of Blender. Just like pose libraries and brush management may be based on the asset system, there are other use cases for it. For example, better media management for video editing in the Sequencer, a number of non-atomic geometry nodes assets, deep integration into the Grease Pencil workflow, parametric modelling with assets, particle painting with assets, presets for “everything”, … It is way too much for the core asset project team to handle. We rely on other modules to follow the project and come up with their own plans and proposals to make use of the asset system where it makes sense. Of course we have to communicate designs well, welcome their feedback and questions, and be ready to help.
Organizing Further Work
So far the asset projects were quite developer driven. For the upcoming work we want to broaden engagement a bit. That means, involving stakeholders more and communicating designs better.
There will be an asset meeting every 2 weeks, open for everybody to join. The exact day & time still needs to be decided, and will be published in the #asset-project channel on Blender Chat.
Blender 3.0 Targets
Finally! Here’s what we want to get into the Blender 3.0 release:
Milestone 1 (Basic, Local Asset Browser): Finished and polished.
Placement tool: Makes it possible to adjust assets in the scene, especially after dragging them in from the Asset Browser.
Catalogs: The regular catalogs; the dynamic catalogs are not a target but a “nice to have” for 3.0.
Extending add-menus with assets: E.g. extending the Add Node menu with node group assets. For Blender 3.0 this will likely be limited to a number of assets from the standard library. Future versions of Blender should make this extendable by add-ons.
An extended version of the targets is available in the workshop documents. It includes “nice to have” targets and noteworthy parts that are explicitly not release targets. We recommend checking this out as well.
What does the team say?
Last but not least, let’s hear a few words from the workshop team. Why did they join and how do they feel about the outcome?
There are many big projects and changes in motion right now and the asset browser is one of them that is at the very center. For me it was important to join in and understand what the scope and possibilities are and also to give my feedback on how every part of Blender can benefit from this feature, like managing Blender presets (brushes, tools, templates, etc). Since so many voices from different fields were joining in and had similar ambitions I am very happy with the outcome.
Julien Kaspar
The asset browser is the first step towards having a tool management system, required for handling the complexity of the new sculpting and painting tools. Thanks to the workshop it is now clear how to define, store and browse these tools and presets to later build better and more configurable UIs when working with Blender. Having the asset browser will also let us ship Blender with an asset library of preconfigured tools, properly classified and organized using catalogs and dynamic catalogs. This will allow users to discover a lot of existing functionality that would otherwise be almost inaccessible due to highly complex tool configurations.
Pablo Dobarro
The asset browser is the most complex project I have ever worked on, it’s a true challenge. It’s easy to feel overwhelmed, even if I’ve been quite confident in what we were building so far. It was great to finally share ideas and designs with more stakeholders, think the big picture through again as a team, and to finally have concrete plans for 3.0. The project entered a new phase and the workshop was the perfect start for it. Liftoff confirmed!
Julian Eisel
The workshop was a great way to hear all the different perspectives on Blender’s asset system. It’s great that we now have a much clearer picture of where to take it next. I’m proud of the animators having a more powerful & easy to use pose library at the tip of their fingers right now, without having to download a 3rd party add-on. I also trust in the concept of regular & dynamic catalogs, I think it’s one of the important milestones.
Sybren Stüvel
The Geometry Nodes project has just finished its third 2-month cycle. This time the target was an improved material pipeline, curves, volumes and designing a system to pass attributes more directly. To help wrap this up and prepare for Blender 3.0, part of the team gathered in Amsterdam for a nodes workshop from June 22 to 25.
Attribute Processor presentation and discussions.
The workshop was organized by Dalai Felinto and Jacques Lucke. They picked the topics to work on, updated their latest designs and, once ready, presented them for further debate to a wider audience (Andy Goralczyk, Brecht Van Lommel, Pablo Vazquez, Simon Thommes, Ton Roosendaal).
The topics covered were:
Simulation Solvers
Geometry Checkpoints
Attribute Sockets
Particle Nodes
Physics Prototype
Simulation Solvers
How to integrate simulation with geometry nodes?
The simulation solvers will be integrated in the pipeline at the Geometry Nodes level. The solver nodes get geometry inputs and outputs, so the geometry can be transformed with further nodes, or even connected to a different solver afterwards. While some solvers can support a single node with inputs and outputs, other solvers will have multiple input and output sockets.
A solver node will point to a separate data-block. The same data-block can be used in different Geometry Nodes trees. This way, multiple geometries can be solved together, each interacting with each other in the same solver (e.g., different pieces of cloth colliding among themselves).
Colliders and force fields will still be defined at the object level. This will allow any object outside the Geometry Nodes system to influence multiple solvers.
Because the solver evaluation order depends on how the solvers are connected in the node trees, it is important to display an overview of their relations. At the View Layer level, users will be able to inspect (read-only) the dependencies between the different solver systems, including any circular dependencies. Their order is still defined in the geometry node trees, though.
View Layer overview of solvers dependencies.
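As an illustration of how such an overview could detect problems, here is a minimal Python sketch (purely illustrative, not Blender code) that walks a hypothetical solver dependency graph and reports circular dependencies:

```python
# Small illustrative sketch: solvers referenced across node trees form a
# dependency graph, and a depth-first walk can report circular dependencies
# for a read-only overview.
def find_cycle(dependencies):
    """`dependencies` maps a solver name to the solvers it must run after."""
    visiting, done = set(), set()

    def visit(node, path):
        if node in done:
            return None
        if node in visiting:
            return path + [node]          # a circular dependency was found
        visiting.add(node)
        for dep in dependencies.get(node, ()):
            cycle = visit(dep, path + [node])
            if cycle:
                return cycle
        visiting.discard(node)
        done.add(node)
        return None

    for solver in dependencies:
        cycle = visit(solver, [])
        if cycle:
            return cycle
    return None

# Two cloth solvers feeding a rigid body solver, with an accidental loop:
print(find_cycle({"rigid": ["cloth_a", "cloth_b"], "cloth_a": ["rigid"], "cloth_b": []}))
```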
Geometry Checkpoints
How to mix nodes, edit mode, and more nodes?
Sometimes artists need to use procedural tools, manually tweak the work, and then have more procedural passes. For example, an initial point distribution can be created using Geometry Nodes, then frozen. At that point, the artist could go to edit mode, delete some points, move others around, and then use Geometry Nodes again to instance objects.
For this we introduce the concept of checkpoints, a place (node) in the node tree that can be frozen. The geometry created up to that point is baked and can be edited directly in edit mode. Any nodes leading to the checkpoint cannot be changed without invalidating the manual tweaks.
For simulation systems, like hair, this is also desirable. Distribute hair, comb it, add more complexity via nodes, simulate it, and add more nodes. Even if the simulated data cannot be edited, freezing the simulation allows for performance improvements. You can simulate one frame, freeze, and still see how the rest of the tree works when animated, without having to re-evaluate the simulation for every frame.
A live simulation and a baked cache are interchangeable. A node that reads from a cache (VDB, Alembic, etc.) should behave the same way as a solver node. This overall concept was only briefly presented and still needs to be explored further and formally approved.
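To make the checkpoint idea more tangible, here is a small illustrative Python sketch (hypothetical names, not the actual design) of a node that bakes its upstream result and invalidates manual tweaks when anything upstream changes:

```python
# Minimal sketch of a checkpoint: the geometry computed up to this node is
# baked once and reused, and manual edit-mode tweaks are applied on top of it.
class Checkpoint:
    def __init__(self, upstream_eval):
        self.upstream_eval = upstream_eval   # callable producing the upstream geometry
        self.baked = None
        self.upstream_hash = None
        self.manual_edits = []               # edit-mode tweaks applied on the baked data

    def evaluate(self, upstream_inputs):
        key = hash(upstream_inputs)
        if key != self.upstream_hash:
            # Upstream changed: re-bake and discard the manual tweaks (invalidated).
            self.baked = self.upstream_eval(upstream_inputs)
            self.upstream_hash = key
            self.manual_edits.clear()
        geometry = list(self.baked)
        for edit in self.manual_edits:
            edit(geometry)
        return geometry

cp = Checkpoint(lambda inputs: [i * inputs for i in range(5)])
cp.evaluate(2)                                # bakes [0, 2, 4, 6, 8]
cp.manual_edits.append(lambda g: g.pop(0))    # edit mode: delete the first point
cp.evaluate(2)                                # bake reused, tweak applied -> [2, 4, 6, 8]
cp.evaluate(3)                                # upstream changed -> re-baked, tweaks discarded
```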
Attribute Sockets
How to pass attributes directly from one node to another?
At the moment each geometry node operates on the entire geometry. A node gets an entire geometry in, and outputs an entire geometry. There is no granular way to operate on attributes directly, detached from the entire geometry. This leads to an extremely linear node tree, which is hard to read.
The initial proposal to address this was called the attribute processor. This was discussed but it faced a few problems:
No topology change is possible within the attribute processor.
Having a different context inside a geometry node group adds a level of complexity.
There is no way to pass attributes around different sets of geometries.
The attribute processor would only work on existing geometry, it couldn’t be used to create new geometry.
As an alternative, the team tried to find ways to handle attributes as data directly at the node group level. Besides this, there was a pending design issue to be addressed: data abstraction inside and outside of node groups. In other words, how to make sure that an asset created and shared for re-usability still has all the data it requires available once it is used by another object.
This proved to be trickier than expected, and it is being discussed as a follow-up to the workshop. You can check the current proposals here:
Update: There is a design proposal being formalized at the moment to address those problems.
Particle Nodes
Many existing solvers can work as a black box that receives geometry and extra settings, and reads the current view layer’s colliders and force fields. The particle system, however, is a box full of nodes instead.
Nothing definitive was proposed, but the 2020 design was revisited to see if it would be compatible with the proposed integration with geometry nodes.
Example of particle simulation data-block – the rules / nodes are added by the artists.
Particle Simulations are solvers but they have their behavior defined by nodes.
There are three different node types: Events, Condition Gates and Actions:
Events – On Init, On Collision, On Age
Condition Gates – If attribute > x, If colliding, And, Or, …
Actions – Set Gravity, Keep Distance, Set Forces, Kill Particles.
Similar to Geometry Nodes, each Action node can do one of the following (a short sketch after this list illustrates the spreadsheet metaphor):
Add or remove a row in the spreadsheet (i.e., create or delete points).
Add or remove a column in the spreadsheet (i.e., create a new attribute).
Edit a cell/row/column (i.e., set particle position).
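Here is a hedged Python sketch of that spreadsheet metaphor; the rule names are invented for illustration and are not part of the actual design:

```python
# Particles are rows, attributes are columns.
particles = {"position": [0.0, 1.0], "age": [3, 12]}   # two columns, two rows

def add_particle(data, **attrs):
    # Action adding a row (a new particle).
    for name, column in data.items():
        column.append(attrs.get(name, 0))

def add_attribute(data, name, default=0.0):
    # Action adding a column (a new attribute for every particle).
    data[name] = [default] * len(data["position"])

def kill_old_particles(data, max_age):
    # Condition gate + action editing the spreadsheet: drop rows where age > max_age.
    keep = [i for i, age in enumerate(data["age"]) if age <= max_age]
    for name in data:
        data[name] = [data[name][i] for i in keep]

add_attribute(particles, "velocity")
add_particle(particles, position=2.0, age=0)
kill_old_particles(particles, max_age=10)
```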
Physics Prototype
As part of the Blender 3.0 architecture the simulation system will work independently from the animation playback. For this the physics clock needs to be detached from the animation clock.
To address this we will have a prototype to test how this can work both in the UI and the back-end. The scope of this was not covered during the workshop.
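As a rough illustration of what detaching the two clocks could mean (an assumption about the prototype, not a confirmed design), the solver could advance in fixed real-time steps regardless of which animation frame is being shown:

```python
# The physics clock advances with real elapsed time, independent of the
# animation frame displayed by the playhead.
PHYSICS_STEP = 1.0 / 60.0   # fixed solver step in seconds

def advance_physics(elapsed_seconds, physics_time, solver_state):
    """Run as many fixed solver steps as real time demands, regardless of frames."""
    while physics_time + PHYSICS_STEP <= elapsed_seconds:
        solver_state = solver_state + PHYSICS_STEP   # stand-in for one solver step
        physics_time += PHYSICS_STEP
    return physics_time, solver_state

# The viewport can keep showing frame 1 (playback paused) while physics still runs:
physics_time, solver_state = advance_physics(elapsed_seconds=0.5,
                                             physics_time=0.0, solver_state=0.0)
```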
In late June a Nodes workshop took place at the Blender HQ in Amsterdam. The goal of that workshop was to map out the future of the Geometry Nodes and simulation systems. Besides formalizing some design decisions for the projects, a lot was done in regard to the solvers design. However, there was still one big inconclusive topic: a better attributes workflow.
Now in August it was time for another round of meetings to bring this home.
Ton Roosendaal helping to find a story and design to explain the Fields concept.
The Geometry Nodes system has already been used in a few productions. Nonetheless, what is now in Blender (2.93) has clear shortcomings. The main one is that artists should be able to operate directly on attributes; how to do that, however, was not clear.
After the June workshop the Geometry Nodes team worked on different proposals, and opened a discussion with the community:
To move the discussion forward two different designs were chosen:
Fields gives the most flexibility when building the node-tree. Its concept is rather abstract, but very similar to the existing shader node-trees.
Expandable Geometry Socket has a simpler metaphor. There is a single data flow in the entire node-tree (either geometry or a list of values).
A week of development was reserved for each design to build a working prototype. Both prototypes were available to the community for testing and were ready on time for another round of face-to-face workshops in early August. In the end, Fields was picked as the best option to move forward.
Data Flow and Function Flow
Fields introduces two distinct flows in the node tree: a data flow and a function flow. This design is closer to the underlying software design, which is then exposed to users in a true “form follows function” fashion.
Different flows in the same node tree
Data Flow
The data flow goes from left to right and operates directly on the geometry (usually a geometry is passed to the node, which outputs a new, modified geometry).
Geometry data flow: from left to right.
Function Flow
The function flow is a sequence of function nodes that are passed to the geometry nodes and are read backwards, as a callback.
Function flow: callbacks and attribute references.
Prototype and Example
The complexity of understanding how the different flows work is overcome once the system is used hands-on. Artists familiar with the shader system will see similarities between fields and the shader node trees.
Prototype demoing extrusion with Fields – video by Simon Thommes
The prototype didn’t manage to cover all the upcoming design changes (e.g., to more easily distinguish between the different flows). But it helped to sell the idea of operating directly on attributes, and the flexibility of working with fields. Artists don’t need to worry about attribute domains (e.g., converting a face attribute to a vertex attribute), and are free to change the topology of the geometry while the function flow still works as expected.
Geometry Nodes doughnut using Fields – image by Pablo Vazquez
There are still design topics that need to be addressed, but the main story of the system was better defined and iterated on during the workshop.
Definitions
Geometry sockets/noodles: Data transport + operation order (geometry nodes).
Attribute and field sockets/noodles: Functions “programming” (function / field nodes).
Names may change when this moves to the final documentation, but the underlying definitions are already valid; a short sketch after the definitions below illustrates how they fit together.
Fields
A function with variables (such as attribute names).
Evaluates on a geometry.
Evaluates to data.
Can be combined into new Fields.
Attribute
Geometry data stored per element (vertex, edge, …).
Can be accessed as Fields.
Are propagated through Geometry nodes.
Types:
Built-in: always available global names (position, material_index, …).
Static: user data or generated (UVs, selection, …).
Geometry Node
A node that operates on a geometry.
Can accept Fields as inputs (evaluates input Fields on the node’s geometry).
Can create attributes which are outputted as Fields (generated Static attributes).
Can edit/write only to Built-in attributes.
Outputs a new geometry.
Function Node
Can accept Fields as inputs.
Has no Geometry input.
Doesn’t evaluate Fields inputs (since it has no geometry).
Outputs Fields.
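To make the definitions above concrete, here is a short, purely illustrative Python sketch of a Field as a deferred function that a Geometry Node evaluates on its geometry. The class and function names are invented for illustration and are not Blender’s implementation:

```python
class Field:
    """A function with free variables (e.g. attribute names) that evaluates on a geometry."""
    def __init__(self, fn):
        self.fn = fn

    def evaluate(self, geometry):
        # Evaluates to data: one value per element of the geometry.
        return [self.fn(geometry, i) for i in range(len(geometry["position"]))]

    def __add__(self, other):
        # Fields can be combined into new Fields without touching any geometry yet.
        return Field(lambda geo, i: self.fn(geo, i) + other.fn(geo, i))

def attribute(name):
    # Attributes can be accessed as Fields.
    return Field(lambda geo, i: geo[name][i])

def set_position_node(geometry, offset_field):
    # A Geometry Node: evaluates its input Field on its own geometry,
    # then outputs a new geometry (here, points offset along Z).
    offsets = offset_field.evaluate(geometry)
    new_positions = [(x, y, z + o) for (x, y, z), o in zip(geometry["position"], offsets)]
    return {**geometry, "position": new_positions}

geometry = {"position": [(0, 0, 0), (1, 0, 0)], "weight": [0.5, 2.0]}
result = set_position_node(geometry, attribute("weight") + Field(lambda geo, i: 1.0))
```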
Next Steps
There are still a lot of open design questions on how to integrate Fields into Blender:
A new socket design for Fields.
Function and Geometry nodes should look different.
Visual distinction between the data transport and the functions flow.
Clear relation between an attribute and the geometries that can access it.
Spreadsheet editor to show Built-in, Propagated, and Static attributes (through a Viewer node).
On top of the designs, there is work to be done for backward compatibility. This is all scheduled for Blender 3.0 and will be the focus of the Geometry Nodes module. That means the core developers and the community will prioritize this over adding new nodes/features.
After that, the focus will shift to expand the Fields design for future node systems.
Try it Yourself
Download the experimental build (temp-geometry-nodes-fields-prototype) and the doughnut file to get a feeling of the new system. Remember that experimental builds should not be used in production as they might be unstable or corrupt your data.
The 2021 Google Summer of Code projects have just wrapped up, concluding ten weeks of Blender development over eight projects helmed by students and supervised by members of the Blender development team. This is the first post in a four-part series exploring what the students have achieved during this year’s development stint.
Episode One: Geometry Codes
The first part of this series will look at GSoC projects dealing with one of the liveliest and youngest sections of Blender: Geometry Nodes. Two projects tackled the module, with Fabian Schempp diving head-first into the development fray and porting a slew of modifiers into their nodal form, and Himanshi Kalra providing support to this breakneck development pace by developing a framework for regression testing of Geometry Nodes.
Porting popular modifiers to Geometry Nodes – Fabian Schempp
Familiar grounds
Fabian Schempp is no stranger to the Blender codebase: He had already been contributing patches to the Geometry Nodes Project since the end of 2020, a familiarity that must have come in handy during his GSoC tenure, which saw him complete and submit patches for nine new modifier-turned-nodes. Fabian was mentored by Jacques Lucke and Hans Goudey.
Show me the nodes
Here is a quick rundown of the nine added nodes.
Remesh Blocks
Based on the Remesh modifier.
Merge by Distance
Based on the Weld modifier.
Unsubdivide
Based on the Decimate modifier.
Dissolve
Based on the Decimate modifier.
Voxel Remesh
Based on the Remesh modifier.
Collapse
Based on the Decimate modifier.
Mesh Inset
Based on the Mesh Inset operator.
Mesh Extrude
Based on the Mesh Extrude operator.
Solidify
Based on the Solidify modifier.
Seven of these nine new nodes are ports of the “Solidify”, “Weld”, “Remesh”, and “Decimate” modifiers. Some modifiers (such as the “Remesh” modifier) have been split into multiple nodes to fit into Geometry Nodes’ modular design philosophy. Fabian also ported the “Inset” and “Extrude” operators, with the latter’s power being on full display in the “Attributes and Fields” blog post. Fabian’s final report contains images and examples of what every single one of these is capable of.
What comes next
Fabian plans to continue working on these nodes until they are ready to be merged to master, a step that would probably include porting them to work with the up-and-coming “fields” revision of Geometry Nodes.
Regression Testing of Geometry Nodes – Himanshi Kalra
One more time, with feeling
Next up is Himanshi Kalra, who is not only familiar with the Blender codebase, but is also a GSoC veteran: she had already successfully worked on regression testing during the 2020 edition of GSoC. This year Himanshi returned, flanked by mentors Habib Gahbiche and Jacques Lucke, to develop a regression testing framework for the rapidly growing Geometry Nodes.
Testing, 1, 2, merged
At the time of writing, Himanshi has successfully submitted and merged four patches, with the main new feature being the extension of the framework to test .blend files directly, without going through the Blender API to add tests. This process is explained in more detail by Himanshi on the wiki. While its usefulness and importance within the growing Blender project are beyond question, the development-oriented nature of this GSoC project makes it less flashy and visually compelling than its brethren. So here is a screenshot of all the nodes with implemented tests at the time of writing, to satisfy your Blender-loving irises:
Himanshi developed quite the testable roster over her ten week stint
Join us next week for the second episode of the GSoC Roundup, where we will take a look at improvements made to the VSE and UV Editor!
Find out more on these two projects on the links below:
Developer Hans Goudey shares the results of the geometry nodes curve project, and future goals.
Adding a procedural curve editing pipeline in 3.0 has been a large focus for the geometry nodes team and community for the past few months. One of the last larger changes for 3.0, supporting the geometry nodes modifier on curve objects, was committed a few days ago. So I thought it would be a good idea to give an update on the project. Where are we, and what does the future hold?
Just like curve objects, three curve types are supported in geometry nodes:
Bezier: With control points, handles, and handle types, each editable as separate attributes
Poly: A simple series of connected points
NURBS: A more advanced spline type with different controls exposed
The current built-in attributes on curve data
The New Nodes
One of the fundamental ideas of the changes is that it should be clearer when data is a curve and when it is a mesh. Currently Blender doesn’t distinguish very well between the two, because it implicitly changes a curve into a mesh. But we can take advantage of the flexibility of nodes to make the process much more visible.
That flexibility means that now any object that supports geometry nodes can output curve data, not just curve objects!
Curve to Mesh
This node is the counterpart of the existing “Curve Bevel”, except that the node can accept multiple input profiles, and automatically marks sharp edges.
Curve Fill
The fill node creates a 2D mesh from curves, filling the inside with faces. It was contributed by Erik Abrahamsson, building on the triangulation library built by Howard Trickey for exact boolean. It’s the counterpart to the existing 2D curve fill in 2.93, but it should give better, more predictable results.
Curve Trim
The curve trim and fill nodes used on curve splines from text characters
The trim node makes curve splines shorter, with a length or factor input. The node was contributed by Angus Stanton.
Curve to Points
This node generates a point cloud with evenly spaced points on the curve, along with the data needed for instancing, like rotation.
Curve Endpoints
Also contributed by Angus Stanton, this node is a simpler counterpart to the curve to points node that only outputs the start and end of each spline.
Resample Curve
Drawing clouds
This node outputs a new curve, sampling consistent lengths from the input. This is just like the curve to points node, except the output is still a curve.
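For readers curious what “sampling consistent lengths” means in practice, here is a small illustrative Python/NumPy sketch of resampling a poly spline by arc length; it is not Blender’s implementation:

```python
import numpy as np

def resample_polyline(points, count):
    """Return `count` points evenly spaced along the arc length of a polyline."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)   # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])           # arc length at each point
    targets = np.linspace(0.0, cum[-1], count)              # evenly spaced arc lengths
    # Interpolate each coordinate along the arc length parameter.
    return np.stack([np.interp(targets, cum, points[:, i])
                     for i in range(points.shape[1])], axis=1)

square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(resample_polyline(square, count=7))
```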
Curve Subdivide
Like the resample curve node, this node makes a higher resolution curve, but it keeps the original spline type, and doesn’t change the shape at all for Bezier curves. Each segment can have its own number of cuts.
Mesh to Curve
Blender hasn’t had the ability to create a curve from mesh edges procedurally before. Now it does! With a selection input, any of a mesh’s edges can be turned into curve splines, and any extra attributes will be automatically transferred.
Handle Type Nodes
This node procedurally changes the left or right handle types of Bezier control points.
Set Spline Type
Contributed by developer Johnny Matthews, this node changes each spline’s type between the Bezier, poly, and NURBS types mentioned above, just like the edit mode operator.
Suddenly the star primitive gets much more interesting!
Curve Reverse
Also created by Johnny, this is a simple node that switches the direction of a curve, just like in edit mode.
Curve Length
Another simple node to output a curve’s total length. Created by Johnny as well, who shared it on his YouTube channel.
Primitives
The defaults for each curve primitive node
Just like the existing mesh primitive nodes, there are now curve primitive nodes, again by Johnny Matthews. These provide a surprisingly useful and flexible base for both 2D and 3D contexts. Many of the nodes have options for different input methods, like finding the circle that passes through three points in space.
The Internals
Curve support in geometry nodes has also provided an opportunity to significantly improve the internals of the curve code. A large portion of the existing curve code in Blender dated back to its initial open source commit, 19 years ago! While that’s not a problem in itself, standard good practices of software development have changed a lot in that time, and it was often confusing and hard to work with. Existing curve data in Blender also didn’t support storing generic attributes, which is essential for geometry nodes.
With the aim of replacing most of the existing code, we created a new implementation for the basics of curves, designed to be more efficient, much easier to work with, and properly documented. Already this has been worth it; with an improved foundation we could make progress much more quickly than I expected!
Conclusions
Personally I’ve been interested in curves since I started working on Blender two years ago, so I’ve been happy to take on such a specific area and push it forward for a while. Five months, actually! I’ve been very invested in the project, so time has really flown by!
I want to thank the community of developers contributing to this project who made it a joy to work on, and Jacques Lucke for the consistent advice.
The Future
What we have now should already be a solid base of features for 3.0, but more is planned for the near future:
Curve Fillet: Bevel-like behavior in a node, from Dilith Jayakody’s 2021 GSoC project.
Connect Splines: A node to join multiple sections of a curve into one.
Curve Parameter: A field input node, to output how far along the curve each point is.
More complete attribute support: Transfer attributes in the curve to mesh node, expose the handle position attributes.
Curve sampling with any arbitrary factor.
Convert more of the features from curve objects into nodes.
And much more! Now that the project is mostly wrapping up, the developer community will be essential in adding new features.
Further in the future, these changes can be applied to curve edit mode as well. Editing and painting custom attributes will be particularly important.
The 2021 Google Summer of Code projects have just wrapped up, concluding ten weeks of exciting Blender development over eight projects helmed by students and supervised by members of the Blender development team. This is the second post in a four-part series exploring what the students have achieved during this year’s development stint.
Episode Two: Editorial Endeavours
After taking a look at the Geometry Nodes-related projects in the first part of this series, today’s post will go through the two projects tackling two editors nestled in the confines of Blender, frequented by intrepid editors and texture artists: The UV Editor and the Video Sequencer Editor (VSE). Aditya Jeppu spearheaded the project aiming to implement individual strip previews in the VSE, while Siddhartha Jejurkar took on the task of improving Blender’s UV Editor.
Video Sequence Editor strip previews and modification indicators
Camera, Lights, Previews
Aditya Jeppu’s GSoC project proposal initially aimed to deliver two new features to Blender’s VSE: strip previews and modification indicators. However, after consultation with mentor Richard Antalík, Aditya ended up focusing on getting the strip previews fully functional and polished enough to increase the chance of the patch making it to master; and that’s exactly what the student delivered:
Aditya Jeppu’s patch instantly increases the VSE’s usability by dynamically generating previews for loaded clips.
The once monochromatic VSE strips are now brimming with life, and most importantly: They are full of useful visual information.
What comes next
While this patch still requires some work to deal with some lingering flickering artifacts, Aditya’s eyes are firmly fixed on the future possibilities this feature opens, including a potential extension of the preview mechanism to scene strips.
UV Editor Improvements
Unwrapping your head around it
Seeing how the Blender community rejoices at the mere mention of UV Editor updates, Siddhartha Jejurkar’s GSoC project is bound to make a few texture artists sleep better (and for a few hours more) at night.
Under the eye of his mentor Campbell Barton and with additional guidance from Daniel Bystedt, Siddhartha managed to implement all planned improvements in his initial proposal and then some. While Siddhartha extensively documents the functionality of his newly implemented features on his final report, here is a quick taste of what this student has achieved over the summer:
Pack Islands to Box Area
It is now possible to define an area in which to pack UVs. Learn more.
Siddhartha’s new modal operator “Pack Islands to Area” in action
Snapping Improvements
Two new snapping types have been added. Learn more.
Snapping improvements including Increment and Absolute Grid Snap
New grid types for the UV Editor
The grid has been revamped as well for customization and visualization. Learn more.
Join us next week for the third and penultimate episode of the GSoC Roundup, where we will take a look at improvements made to the Knife Tool and Curves!
Find out more on these two projects on the links below:
The 2021 Google Summer of Code projects have just wrapped up, concluding ten weeks of Blender development over eight projects helmed by students and supervised by members of the Blender development team. This is the third post in a four-part series exploring what the students have achieved during this year’s development stint.
Episode Three: Ahead of the Curve, On the Cutting Edge
While the second post of the series explored the GSoC projects dealing with the VSE and the UV Editors, this third and penultimate post will go through the two projects aiming to improve a fundamental element of any DCC software: Modelling. More specifically, these projects tackle two areas and tools that are at the foundation of many modeling workflows. Dilith Jayakody was at the head of the Curve Improvements project, while Cian Jinks was responsible for the Knife Tool Improvements project.
Curve Improvements
Straight Shooter
Under the supervision of project mentors Hans Goudey and Falk David, Dilith Jayakody aimed to improve the manipulation and creation of Blender curves, an area which the student had already contributed to before the beginning of the summer. As per the project proposal, Dilith ended up implementing two major new features at the end of his GSoC tenure:
Curve Fillet Node
Initially planned as a regular tool, this feature was implemented as a Geometry Nodes node at the recommendation of mentor Hans Goudey; the implementation has the same functionality while benefiting from (and contributing to) the procedural breadth of the system. The goal of this node is to create and control bevel-like rounding of curves at control points. Learn More.
The Curve Fillet node’s different options in action
Curve Pen Tool
The goal of the pen tool is to overhaul the curve editing and creation process by concentrating old and new curve editing functionality and shortcuts into a single tool. Learn More.
The Curve Pen tool makes the creation and editing of curves a breeze
What comes next
Aside from working on getting these two patches merged with master, Dilith will be looking at implementing further improvements to the Curve Pen tool, including snapping support, vertex slide, and more.
Knife Tool Improvements
Sharp as a Knife
The brainchild of student Cian Jinks, the Knife Improvements project holds the distinction of being fully merged with master, meaning that all the features discussed below are already available in the latest Blender 3.0 daily builds! Under the supervision of mentor Howard Trickey, Cian aimed to implement a plethora of features upgrading the Knife Tool’s usability and power. While the student’s final report and patch description hold all the details on the newly implemented features, here is a rundown of some of the standouts:
Visible Distance and Angle Measurements
Pressing S will enable measurements, and pressing S repeatedly will cycle between the three modes: Only Distance, Only Angles, Both. Learn more.
Cian’s Knife Tool improvements include the ability to show measurements and angles between cut segments and mesh edges.
Constrained Angle Mode Improvements
Added setting to control increments, ability to enter snapping angle increment via number keys, and a new local constrained angle mode. Learn more.
The new local constrained angle mode in action
Multi-Object Edit Mode
The Knife tool can now cut through multiple objects in edit mode. Learn more.
Takes the local orientation of the object the cut segment was started on.
Undo Improvements
Pressing Ctrl-Z undoes the previous cut segment and re-adjusts starting point for the current cut segment. Learn More.
The knife tool now supports expected undo behavior.
Snapping to Global and Local Orientation
Press X, Y, or Z to constrain to an axis. Press the axis key again to toggle between Local/Global orientation. Respects the scene orientation setting if set, allowing for custom orientations. Learn More.
Join us next week for the last episode of the GSoC Roundup, where we will take a look at the final two projects tackling Blender physics!
Find out more on these two projects on the links below:
The 2021 Google Summer of Code projects have just wrapped up, concluding ten weeks of Blender development over eight projects helmed by students and supervised by members of the Blender development team. This is the fourth post in a four-part series exploring what the students have achieved during this year’s development stint.
Episode Four: Let’s Get Physical
The summer is firmly in the rearview mirror now, meaning the 2021 GSoC overview series is coming to an end. While the third post in the series looked at the Curve and Knife tool projects, this fourth and final post will go through the two GSoC projects dealing with the realm of physics and simulations, closing out this tetralogy with an (adaptively simulated and accurately visualized) bang. More specifically, we will be looking at the “Adaptive Cloth Simulator” project by Ish Bosamiya and the “Display simulation data for rigid bodies and cloth” project by Soumya Pochiraju.
Adaptive Cloth Simulator
You Can Keep Your Hat On
First up is the “Adaptive Cloth Simulator” project by Ish Bosamiya, with mentors Sebastian Parborg and Sebastián Barschkis. This was not Ish’s first GSoC rodeo: the student participated in the 2019 edition, laying the groundwork for this year’s project by also working on adaptive cloth simulation. However, the initial plan of simply refactoring the 2019 code and pushing through with the project was dropped when Ish realized how far the general Blender code-base had evolved since then. This led Ish to completely rewrite the project from the ground up, setting the stage for a potential merge with master.
As a result, Ish came up with a new mesh structure and changed the cloth modifier into a non-deform modifier that stores the previous frame, all in an effort to implement new static and dynamic remeshing algorithms.
Static Remeshing
A new static remeshing implementation by Ish. Learn More.
Ish’s Mesh Analyzer Tool in action debugging static remeshing
Display Simulation Data for Rigid Bodies and Cloth
Show Me the Magic
Soumya Pochiraju’s project, which aims to provide visual debugging tools for physics work in Blender, was also mentored by the two Sebastians. Over the summer, Soumya managed to implement displays for all the data mentioned in the initial proposal save for cloth springs, on account of time. This leaves the project with an impressive roster of new visualizations, which we will go through below.
Velocity, Acceleration, and Forces
Velocity, acceleration and forces acting on active rigid bodies are displayed in the form of color-coded vectors. Normal, effector and gravitational forces are in pink, frictional forces are in yellow and the resultant is in cyan. Learn more.
Normal, frictional, and net forces on a cube, displayed via Soumya’s implementation.
Collisions
The frame of the collision shapes lights up if an object collided with something within the past timestep. Learn more.
Colliding Boxes
Constraints
In collaboration with Sebastian Parborg, three ways were devised to visualize constraints: Slider, Hinge, and Piston. Learn more.
Keep in mind that all GSoC projects are developed in separate branches, meaning that these features have not necessarily been merged with master. Whether these patches ultimately land (or not) depends on their overall quality and on the GSoC students’ motivation to continue development and maintenance of their work beyond the GSoC weeks.
This is it for this year’s overview series. It’s been a pleasure covering what has proved to be an exceptional GSoC year, with a majority of the projects hitting their proposed goals and landing in master.
Find out more on these two projects on the links below:
In August of 2000, Blender 2.0 was released at the SIGGRAPH show. That means that the Blender 2 series has been running for a little over two decades! We certainly don’t want the 3 series to take that long. Starting with Blender 3.0 a new version numbering convention will be used, with a major release planned every two years. According to the new planning, eight minor Blender 3.x releases will be made in the coming two years, two of which will be Long-term Support (LTS) versions.
This article aims to provide an outline for the Blender 3.x roadmap. It has been reviewed with several of the core contributors already. Feedback is very welcome as usual, this will be a living document for a while.
Blender 3.0 Beta
General focus: building on 2.8 UI and workflow, and improving it further.
The general guideline will be to keep Blender functionally compatible with 2.8x and later. Existing workflows or usability habits shouldn’t be broken without good reasons – with general agreement and clearly communicated in advance.
Before 3.0 gets released, all module teams will review ongoing implementations and workflows, and come up with design docs with requirements, outlining what we accept to be revised and what we will stay away from for now.
The module teams should make clear which changes will be happening, what the user benefits are, how we will keep compatibility of previously saved work, and (last but not least) how to get involved as a contributor.
Most areas in Blender are quite stable, but in some areas bigger changes are expected: physics, ‘everything nodes’, sculpting/painting, texturing, and character rigging. None of these changes will violate the roadmap as outlined for 2.8 though.
Core
The core module will be empowered to manage code standards and engineering practices everywhere in the Blender code more strictly (please write docs and provide tests!). Ongoing improvements of architecture and code will continue, aiming at better modularity and performance.
Anything that affects core Blender functionality such as ID management, Blender files, DNA data design, Python API, undo, dependency graph, overrides and APIs in general is meant to get good specs and functional docs, for contributors to know how to use it efficiently. No commit to this module will be allowed without review of Core Module owners.
Python scripts and Add-ons
The Python module is committed to keeping the API working and compatible for all of the 3.x series. Some breaking changes to the API are inevitable; these will always be communicated at least 6 weeks before a release, on the Python release log page. The biggest change in one of the 3.x releases is that BGL will be entirely deprecated and replaced by the GPU module.
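For add-on authors, the migration mostly means drawing through the gpu module instead of issuing raw bgl calls. The snippet below follows the documented gpu module pattern for a simple viewport overlay; treat it as a sketch, since exact built-in shader names may still change during the 3.x series:

```python
import bpy
import gpu
from gpu_extras.batch import batch_for_shader

# Draw a red line in the 3D viewport using the gpu module (no bgl).
coords = [(0, 0, 0), (1, 1, 1)]
shader = gpu.shader.from_builtin('3D_UNIFORM_COLOR')
batch = batch_for_shader(shader, 'LINES', {"pos": coords})

def draw():
    shader.bind()
    shader.uniform_float("color", (1.0, 0.0, 0.0, 1.0))
    batch.draw(shader)

handle = bpy.types.SpaceView3D.draw_handler_add(draw, (), 'WINDOW', 'POST_VIEW')
```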
Modeling
Modeling tools in Blender will be maintained and kept compatible. Speedup for managing massive datasets (large scenes or large models) remains a key topic for further development.
Sculpting / Painting
Currently a proposal for a hybrid sculpting/painting workflow is under review. This would eliminate the need for multires, and introduce a novel approach to combine traditional (triangle offset) sculpting with shader-based texture displacement. The benefit would be to achieve extremely detailed resolution, without the need for massive polygon datasets, memory use, and giant files.
Related to the proposal is to make Blender’s current editmodes atomic and flexible, allowing tool designers to combine multiple editmodes together for more efficient workflows.
Final decisions on the proposal and editmodes are pending a design and review workshop with contributors, which is yet to be announced.
Texturing
Blender’s procedural texturing system is in urgent need of an upgrade. Modern workflows offer node-based procedural textures that can be layered to do something similar to image textures – or better. In Blender we can do this with fully integrated support for these textures inside Eevee, Cycles, viewport drawing, and painting tools.
Design is an open topic still.
Animation Tools
Character rigging is also in need of a serious upgrade. The ‘animation 2020’ project was postponed due to circumstances, but in ‘22 it should come back on the agenda.
Keywords are: node based, layering, debugging tools, speed, crowds, mocap support, automated systems for muscles and physics.
Blender Institute will lead this project by involving industry veterans for design and implementation. The new rigging system is expected to use similar concepts for bones or poses, but it is not expected to be feature compatible. For that reason we expect old rigs and new rigs to co-exist for a while.
UI/UIX
The UI team will be reviewing the 2.8 workflow (toolbars, shortcuts, layouts), and will come up with a design doc to provide focus for developers and guidelines for everyone to understand how to contribute to sensible, high-quality, and consistent UIs in Blender.
The UI module is currently in need of senior UIX engineers and designers. Until these roles have been assigned, UI module team members will mostly assist the other modules in keeping their UIs and workflow well balanced and consistent.
Assets
Asset browsing and managing is one of the big new 3.0 features. Work on this area in Blender will continue for a while. The expectation is that good asset tools in Blender will help configure efficient workflows, for expert artists in productions and beginners alike. Part of the Asset and UI project is to publish (a series of) Blender Bundles: relatively large downloads (1+ GB) with presets, primitives, and asset libraries.
Blender 101 (Application templates)
Being able to configure Blender for personal workflows is a key Blender target, part of the 2.8 project.
To take configuring even one step further, .blend files can be used to replace the standard startup file – creating ‘custom versions’ of Blender this way. These templates will make it possible to completely configure UI layouts, define custom keymaps, and enable add-ons to run at startup. A smart Python scripter can design complete new applications this way, using vanilla Blender releases. As prototypes the team will release a couple of simple templates: a video-player review tool and the 100% dumbed down ‘Monkey Blender’.
A template used this way will be called “Blender Application Template” or just “Blender App”. Planning is to have this working as beta for Blender 3.1, after extensive testing and reviews by contributors and module teams.
Cycles
Cycles is now 10 years old, and had a big rewrite in the last 6 months: the Cycles X project. This included an essential architecture redesign, leading to significantly improved GPU rendering performance and interactivity in Blender 3.0. Following Blender releases will build on this, adding new Cycles production features and improving handling of complex scenes.
Eevee
Our high-quality real-time 3D rendering engine is being upgraded as well. The new design brings screen space global illumination, more efficient shading, improved volume rendering, panoramic cameras, and tight grease pencil integration. It also paves the way for GPU ray-tracing through Vulkan.
A real-time compositing system is also planned, bringing compositing nodes into the 3D viewport. The new Eevee will be designed to efficiently output render passes and layers for interactive compositing, and other renderers will be able to plug into this system too.
Compositing
As promised at the 2019 Blender Conference, compositing is getting attention from developers again. The main goal is to keep things stable and solve performance bottlenecks. New is support for a stable coordinate system using a canvas principle.
VSE
The Blender video editor is getting more developer attention, especially to improve the UI and usability. Noteworthy are the new preview images in strips. The VSE module will spend time on getting the roadmap and requirements clear, enabling the community to contribute efficiently, aligned with generally agreed design principles for the VSE.
Viewport
The 3D viewport in Blender is where the action is; the 2.8 strategy to move more tools to 3D (not as buttons or panels, but by carefully designed 3D widgets or facegroups) will be ongoing.
A big target for the 3.x series is to move Blender to entirely use the Vulkan open standard. Vulkan and Metal backends for Blender’s GPU API are being developed. We expect these to be ready to replace the OpenGL backend by the end of 2022.
CPU/GPU support
Blender is now supported by all major silicon manufacturers: Intel, AMD, Nvidia, and Apple. The developers work closely with the hardware industry to ensure an equal and fully compatible experience for artists who use Blender, regardless of platform or operating system. Everyone can monitor or submit performance reports on Blender’s Open Data website, which will be updated when 3.0 is out.
AR/VR
Blender supports a real-time stereo viewport, for using the industry standard OpenXR library to communicate with VR devices. Support for VR controllers has been added for 3.0. A possible project for 3.x is to add real-time camera tracking for AR and virtual sets.
Everything Nodes
The Geometry Nodes of 2.9x are a big success, and can be seen as proof of concept for ‘everything nodes’ in Blender. Next on the planning is to introduce solvers as nodes, add point-based nodes (particles) and nodes for physics simulations.
Hair systems are a separate topic; here simulation is secondary to artistic control and tools. Specs and design for a node-based hair system are on the to-do list. Object-level (animation) nodes are an open topic as well.
Physics
Blender 3.x will keep using the old physics systems, such as Bullet and the modifiers for Mantaflow, Softbody, and Cloth. An OpenVDB modifier will be added. For the rest, new development will go to node-based solvers. See the previous point.
Real-time mode
Physics simulation and VR viewing already require a new concept in Blender – “time”. Continuous passing of time can be treated independently of frames output for rendering or animation. Blender needs a formal design concept for this; enabling all tools and options for Blender to always work, whether playing in real-time or still. Design challenges are related to animation playback, dependencies, caching and optimization. Ideally the real-time viewport could be used for prototyping and designing real-time environments or experiences. Design topic: a node based editor for events, logic and states.
Grease Pencil
Gpencil projects for 3.x are: storyboarding workflow using Sequencer strips editing, better support for Eevee, asset browser support. With Gpencil being used by the industry a lot, also for final rendering, we expect increasing development attention to this area.
NPR
Line art will get further performance tweaks and features. While speed is a major focus, generating higher-quality lines will also be on the roadmap.
Production pipelines and IO
Blender will remain committed to other industry standards such as OpenVDB, OpenEXR, OpenTimeline, OpenColorIO, OpenSubdiv, and OpenShadingLanguage. Support for MaterialX is on the list to be discussed.
Blender 3.x will see further integration with USD, also to enable hybrid pipelines. More (design and code) contributions are going to be needed by studios from the industry.
Work on a complete and 100% free/open source pipeline will continue as well, coordinated by the team in Blender Studio. Challenge for the coming years remains to ensure that individuals and small teams can create & manage complex industry grade 3D media projects using Blender and other FLOSS tools.
Blender and internet
A core principle of Blender is that it does not require the internet to work. The Blender developers remain strongly committed to providing a complete offline user experience. Optionally, however, Blender should be able to connect to the web for additional features to work. For example: signaling a user of new releases, updating add-ons, browsing asset repositories, sharing data with others or set-up collaborative environments.
Blender will start a new module for this (name to be defined; ‘meta’ is getting too many different meanings!). Module ownership and membership will be managed openly and accessibly, making sure that things are only added with general consensus and when they directly benefit users.
A dedicated website and server on blender.org will be set up for proof of concepts and rolling out online features. By principle and by design only free & open services for everyone will be added here (only requiring an account to login). That means that possible commercial add-ons (like for storage, versioning, libraries) will need to be provided by external and independent vendors. The module can provide APIs for it, and help keep this safe and stable.
After 3.0 is out, Blender Foundation will set up this project at blender.org to start initial design discussions and to recruit contributor members.
Research
As a free and open source project we also need to keep looking for topics that will define the future of 3D tools. We especially welcome contributions from research labs or universities in this area.
A big topic we need to tackle at some point is AI tools – useful ‘assistants’ to speed up repetitive tasks in the creation process.
Mission
In the last year Blender worked on defining a clear mission statement. It’s what drives the blender.org projects and keeps the focus:
We make the world’s best 3D technology available as open source tools for artists.
Or in short: we defend and further The Freedom to Create.
Exciting times are ahead!
Thanks for reading,
Amsterdam October 2021 Ton Roosendaal, Blender Foundation chairman.
Blender 3.0 takes support for AMD GPUs to the next level, with improved AMD GPU rendering support in Cycles. Beta available now!
By: Brian Savery, November 11, 2021
We have some exciting developments to share about AMD graphics card support. Blender 3.0 was announced with some rewrites to the rendering engine Cycles (AKA Cycles X). This removed OpenCL support for rendering on AMD GPUs for technical and performance reasons. To help address this, AMD has been working very closely with Blender to improve support for GPU rendering in Blender using the AMD HIP API, to ensure users of AMD graphics cards can take advantage of all the enhancements found in Cycles X.
For AMD GPUs, this will be in Blender 3.0, expected in December 2021.
Previous versions of Cycles, Blender’s physically-based path tracer, supported rendering via the OpenCL framework. OpenCL is a C-based programming language that allows running programs on many GPUs which support it. However, moving forward our partners at Blender were hoping to merge the separate OpenCL code with the C++ CPU and CUDA rendering code. In short, with Cycles X, they were looking for a way to compile a single codebase that could be used on all the different devices Cycles can render on, including AMD graphics cards.
Luckily, AMD has an open-source solution for developers just for that. HIP (Heterogeneous-computing Interface for Portability) is a C++ Runtime API and kernel language that allows developers to create portable applications for AMD and NVIDIA GPUs from a single source code. This allows the Blender Cycles developers to write one set of rendering kernels and run them across multiple devices. The other advantage is that the tools with HIP allow easy migration from existing CUDA code to something more generic. AMD has been working closely with Blender to add support for HIP devices in Blender 3.0, and this code is already available in the latest daily Blender 3.0 beta release.
AMD Support for the Blender 3.0 Beta
To use this with a supported AMD graphics card an updated AMD Radeon Software driver is needed. Today we are releasing a Windows beta driver for users who wish to test out our support for Cycles X in the latest Blender 3.0 Beta release. As this is a beta release of both our driver and Blender 3.0, our support for Cycles X is currently in preview. We will have more to share about our support in December when Blender 3.0 is expected to be launched. (Linux support is expected Q1 2022)
We love the Blender community and are excited to better support it for rendering on AMD GPUs. In addition to our continuing support of the Blender Development Fund, we will continue contributing code to Blender development to benefit users and improve their workflows and experiences. Stay tuned, as there is more to come!
Compatibility: Cycles X in Blender 3.0 is enabled on AMD RDNA architecture graphics cards and up, and support has been validated by AMD on the following desktop graphics cards:
AMD Radeon PRO W6800
AMD Radeon 6900 XT
AMD Radeon 6800 XT
AMD Radeon 6800
AMD Radeon 6700 XT
AMD Radeon 6600 XT
AMD Radeon 6600
Brian Savery
Brian Savery is a Professional Graphics Software Development Manager for AMD. AMD, the AMD Arrow logo, Radeon, RDNA, and combinations thereof are trademarks of Advanced Micro Devices, Inc.
The rendering improvements from the Cycles X project will be in the upcoming Blender 3.0 release. Since the announcement, developers have been working to complete and stabilize the code, as well as add new features and improve performance.
We’ll give a quick overview of recent developments.
GPU Performance
GPU rendering performance has been further improved. Here’s where we stand compared to 2.93.
Render time on an NVIDIA Quadro RTX A6000 with OptiX
At the time of the initial announcement there was no volume rendering support. Since then we have restored volume rendering, and found that GPU rendering performance improved 3-5x in various volume scenes.
Hair and Shadow Improvements
While most benchmark scenes were rendering faster with Cycles X, a few involving many layers of transparent hair were showing performance regressions compared to 2.93.
One issue we found is that in GPU rendering, if only a small subset of the whole image is slow to render (like a character’s hair) then GPU occupancy would be low. This was improved by making the algorithm to estimate the number of samples to render in one batch smarter. Previously we’d end up rendering 1 sample at a time. Now we detect low GPU occupancy and adaptively increase the number of samples to batch together, which then increases occupancy.
Another part of the solution was to change the shadow kernel scheduling. Previously, continuing to the next bounce would have to wait for all light and shadows to be resolved at the previous bounce. Now this is decoupled, and shadow tracing work for many bounces can be accumulated in a queue. This then gives a bigger number of shadow rays to trace at once, improving GPU occupancy. This matters especially when only a small amount of pixels are going 64 bounces deep into transparent hair, as in the Spring scene.
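A hedged sketch of the batching idea (greatly simplified, not the actual Cycles scheduler) looks like this:

```python
# When few paths remain active, render more samples per batch to keep the GPU busy.
def samples_for_next_batch(active_paths, total_paths, current_batch, max_batch=32):
    occupancy = active_paths / total_paths
    if occupancy < 0.5:
        return min(current_batch * 2, max_batch)   # low occupancy: batch more samples
    return current_batch                           # occupancy is fine: keep the batch size

# e.g. only the hair pixels are still active, so the batch size grows from 1 to 2:
print(samples_for_next_batch(active_paths=5_000, total_paths=2_073_600, current_batch=1))
```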
Further, we found that transparency in hair is usually quite simple, either a fixed value or a simple gradient to fade out from the root to the tip. Instead of evaluating the shader for every shadow intersection, we now bake transparency at hair curve keys and simply interpolate them. Render results are identical in all scenes we tested. Below are two sample images to compare the results.
Koro rendered with Blender 2.93 (116 secs @ 500 samples)
Koro rendered with Blender 3.0 and the optimizations (52 secs @ 500 samples)
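Conceptually, the hair transparency optimization boils down to baking once and interpolating afterwards. The following Python sketch is only an illustration of that idea, with a hypothetical root-to-tip fade, and is not Cycles code:

```python
import numpy as np

# Transparency is evaluated once at the hair curve keys, then interpolated at
# shadow intersections instead of running the full shader every time.
def bake_curve_transparency(key_params, shader_eval):
    """Evaluate transparency once per curve key parameter (0 at the root, 1 at the tip)."""
    return np.array([shader_eval(t) for t in key_params])

def interpolated_transparency(key_params, baked, t):
    """Linearly interpolate the baked values at an arbitrary intersection parameter t."""
    return np.interp(t, key_params, baked)

keys = np.linspace(0.0, 1.0, 5)
baked = bake_curve_transparency(keys, lambda t: 0.2 + 0.8 * t)   # fade from root to tip
print(interpolated_transparency(keys, baked, 0.37))              # ~0.496
```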
For the statistics enthusiast, here are some memory and timing results for a few well known scenes, so that you can see the results for the transparent hair baking and the shadowing optimizations (ref is the reference without any optimizations).
Shadow Optimization Results
Distance Scrambling aka Micro-Jittering
Sobol & Progressive Multi-Jitter (PMJ) can now use distance scrambling (or micro-jittering) to improve GPU rendering performance by increasing the correlation between pixels. There is also an adaptive scrambling option to automatically choose a scrambling distance value. These are available in the advanced settings in the render properties.
Plain PMJ (80 secs @ 1024 samples)
PMJ with scrambling distance 0 (65 secs @ 1024 samples)
Plain Sobol (80 secs @ 1024 samples)
Sobol with scrambling distance 0 (66 secs @ 1024 samples)
To render the above images the scrambling distance was set to zero to maximize the correlation between pixels. This should not be used in practice and was only done to make it easier to see the correlation introduced by the micro-jittering (notice the girl’s shoulder in the images above to the right). In a real setting you would generally use a larger distance to hide these artifacts. This technique can result in less noisy images and, in some cases, improved performance in the range of 1% to 5% depending on your rendering setup (it’s only beneficial for GPU rendering). Below are some performance results using the adaptive scrambling distance, which currently does not work so well for CUDA due to the tile sizes. Work is underway to choose better tile sizes for CUDA, which should result in better performance.
Ambient Occlusion
Ambient occlusion did not take transparency into account in the initial version of Cycles X. We have now restored this, taking advantage of the shadow kernel improvements that also helped with hair.
Also, additive ambient occlusion (AO) support is now available through the Fast GI settings. Additionally, a new option has been added to “Add” the AO result as well as the “Replace” operation that was available already. Below are a few images to compare the results.
Denoising
We improved denoising for volumes. Previously these were mostly excluded from the albedo and normal passes used by denoisers. While there is no exact equivalent to the albedo and normals of surfaces, we make an estimate. This can significantly help the denoiser with volume detail.
The denoising depth pass has also been restored, which was previously removed along with NLM.
AMD HIP
We’ve worked with AMD to bring back AMD GPU rendering support. This is based on the HIP platform. In Blender 3.0, it is planned to be supported on Windows with RDNA and RDNA2 generation discrete graphics cards. This includes Radeon RX 5000 and RX 6000 series GPUs.
We are working with AMD to add support for Linux and to investigate earlier generation graphics cards for the Blender 3.1 release. While we would have liked to support more in 3.0, HIP for GPU production rendering is still very new.
However we think it is the right choice going forward. It lets us share the same GPU rendering kernels and features with CUDA and OptiX, whereas previously the OpenCL implementation was always lagging behind and had more limitations and bugs.
To test the HIP release you need to get the Blender 3.1 alpha and download the latest AMD drivers (see this blog post for more information).
Now that the new Cycles X architecture is in place, we expect that adding various new production features will be easier. This will start in 3.1 and continue through the 3.x series.
Facebook (now Meta) joined the Blender development fund during 2020 with the main purpose of supporting the development of Cycles. A team from Facebook Reality Labs led by Feng Xie and Christophe Hery are interested in using Cycles as a renderer for some of their projects. They chose Cycles because it is a full featured production renderer; however, they are also interested in improving Cycles’s real time rendering capabilities and features for high quality digital human rendering.
There are three main areas of collaboration:
Scene update optimization: reduce synchronization cost between the scene data and their on-device copies to enable real-time persistent animation rendering.
Add native procedural API and Alembic procedural to accelerate baked geometry animation loading and real-time playback.
Better BSDFs models for skin and eye rendering. In particular, we want to support the anisotropic BSSRDF model important for skin rendering, and accelerate the convergence of caustic effects important for eye rendering (planned).
In order to support as many platforms as possible, Cycles stores the data of the whole scene in unified, specific buffers. For example, all UVs of all objects are stored together in a single contiguous buffer in memory, and similarly for triangles or shader graphs. A major problem with this approach is that if anything changes, the buffers are destroyed and recreated, which involves copying all the data back, first into the buffers, then to the devices. These copies and data transfers can be extremely time consuming.
Data for all geometries are packed into unified, specific buffers before being copied to the device.
When any geometry is modified, all data is invalidated, repacked, and copied to the device.
The solution, proposed by Feng, was to preserve the buffers, as long as the number of objects or their topologies remain unchanged, but to copy into the buffers only the data of the objects that have changed. For example, if only the positions of a mesh have changed, only its positions are copied to the global buffer, while no other data is copied. Previously, all data (UVs, attributes, etc.) had to be copied. In the same way, if only a single mesh changes in the scene, only it will have its data copied: no other copy of any other data of any other object will be made.
Only the specific data for the modified geometry is invalidated and copied. However, the entire buffer still needs to be copied to the device.
Change detection is done through a new API for the nodes that the Cycles scene is composed of (not to be confused with the shading nodes). While in the past client applications could have direct access to the nodes’ sockets, these are now encapsulated and have specific flags to indicate whether their value was modified. These flags are set when setting a new value in the socket, if it differs from the previous one. So, for each socket, we can now automatically detect what has changed when synchronizing data with Blender or any other application. Having this information per socket is crucial to avoid doing unnecessary work, and with greater granularity than Cycles could achieve until now.
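A minimal sketch of that change-detection pattern, in illustrative Python rather than Cycles’ actual C++ node API, could look like this:

```python
# Encapsulated sockets record whether their value changed, so that only the
# modified data needs to be repacked and copied to the device.
class Socket:
    def __init__(self, value):
        self._value = value
        self.modified = False

    def set(self, value):
        # Only flag a modification when the new value actually differs.
        if value != self._value:
            self._value = value
            self.modified = True

    def get(self):
        return self._value

class MeshNode:
    def __init__(self, positions, uvs):
        self.positions = Socket(positions)
        self.uvs = Socket(uvs)

    def modified_sockets(self):
        return [name for name, s in vars(self).items() if s.modified]

# During sync, only buffers backing modified sockets are rewritten:
node = MeshNode(positions=[(0, 0, 0)], uvs=[(0, 0)])
node.positions.set([(0, 0, 1)])
assert node.modified_sockets() == ["positions"]
```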
Another optimization was the addition of BVH refit for OptiX: BVHs of objects are kept in memory and only reconstructed if the topology of the object changes. Otherwise, the BVH is simply modified to take into account the new vertex positions.
All in all, these changes reduced the memory consumption of Cycles and roughly halved the synchronization time on average. Benchmark results using FRL’s Trudy 1.0 character are shared below.
Procedural System
Another new feature was the addition of a procedural system. Procedurals are nodes that can generate other nodes in the scene just before rendering. Such a mechanism is common in other production-ready render engines.
Alembic was chosen for the concrete implementation of the first Cycles procedural. Alembic is widely used by 3D content creation systems to exchange baked geometry animations (including skin and hair deformations). In the future, we may explore developing a native USD procedural.
The Alembic procedural loads Alembic archives natively, avoiding a round trip through Blender and its RNA, which is slow to traverse and convert. The procedural is enabled via Blender, and when it is active, Blender itself will not load data from the Alembic archive, saving precious time. The procedural also has its own memory cache to preload data from disk and limit costly I/O operations. With it, data can be loaded much faster than through Blender, up to an order of magnitude faster.
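Conceptually, the memory cache behaves like a prefetch window over the animation frames. The sketch below is only an illustration of that idea, not the actual Cycles implementation:

```python
# Illustrative frame prefetch cache; not the actual Cycles/Alembic implementation.

class FrameCache:
    def __init__(self, load_frame, window=8):
        self.load_frame = load_frame     # callable that performs the costly disk read
        self.window = window
        self.cache = {}

    def get(self, frame):
        # Preload a window of upcoming frames so playback avoids per-frame disk I/O.
        for f in range(frame, frame + self.window):
            if f not in self.cache:
                self.cache[f] = self.load_frame(f)
        # Drop frames that are behind the playhead.
        for f in list(self.cache):
            if f < frame:
                del self.cache[f]
        return self.cache[frame]


cache = FrameCache(load_frame=lambda f: f"geometry for frame {f}")
print(cache.get(1))   # loads frames 1..8 from disk, returns frame 1
print(cache.get(2))   # frame 2 is already in memory, only frame 9 is loaded
```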
Alembic Procedural Performance
Let’s take a look at a case study using Reality Labs’ digital character Trudy, which consists of 54 meshes with 340K triangles. Rendering Trudy in Blender 3.0 using Cycles’ native Alembic procedural increases the playback framerate by 12x, from 2 to 24 frames per second. A detailed breakdown of the acceleration for each computation stage is shown in the table below. (Courtesy of Feng Xie, Reality Labs Research)
Statistics: 54 objects, 340K tris, 770MB textures
Better BSDFs
The team at Facebook is also interested in improving the physical correctness of Cycles’ BSDFs. For example, a recent contribution improved the rendering of subsurface scattering by adding an anisotropic phase function to better simulate the interaction between light and skin (tip: human skin has an anisotropy of about 0.8). This contribution was made by Christophe Hery, co-author of the original Pixar publication that introduced this improvement to the rendering community.
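The standard model for such anisotropic scattering is the Henyey-Greenstein phase function; the sketch below only illustrates what an anisotropy of g ≈ 0.8 means, and the exact form used inside Cycles should be treated as an assumption here.

```python
# Henyey-Greenstein phase function, the common model for anisotropic scattering.
# Shown purely to illustrate what g ~= 0.8 (strongly forward scattering) means;
# treat the exact form used inside Cycles as an assumption.
import math

def henyey_greenstein(cos_theta, g):
    """Probability density of scattering by angle theta, for anisotropy g in (-1, 1)."""
    denom = 1.0 + g * g - 2.0 * g * cos_theta
    return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)

g = 0.8  # roughly the anisotropy of human skin
print(henyey_greenstein(1.0, g))    # forward direction: high probability
print(henyey_greenstein(-1.0, g))   # backward direction: much lower probability
```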
Future Collaborations
While this collaboration previously focused on synchronization speed, it is now focused on physical correctness and path-tracing speed. A patch for manifold next event estimation, which will improve the rendering of caustics, will soon be submitted for review and inclusion. Several further optimizations are also possible to improve the performance of Cycles’ data transfer and its memory usage (especially VRAM).
The Blender Institute (BI) crew has two main tasks: maintaining the blender.org facilities, and initiating high-priority strategic development projects, especially those that require long-term focus from experienced developers.
A strategic project usually involves complex architecture or design, for which working prototypes need to be developed. Once things have proven to work reliably, verified in public on blender.org, strategic BI projects are handed over to the modules and developer community.
Last year the emphasis was on projects that multiple developers could work together on. As a result, in 2021 the BI team delivered:
Asset Browser
Cycles overhaul (known as Cycles X)
Geometry Nodes
Library Overrides
In 2022 the BI team will further integrate this work into Blender and get the community involved through the modules.
Here are the strategic topics on the 2022 agenda:
Application Templates
Overrides
Physics
Texturing
Application Templates
Blender already supports Application Templates in an experimental state. However, users can’t yet easily and properly create their own stand-alone experiences powered by Blender. The authoring pipeline, deployment, and future-proofing/compatibility still need to be figured out. A few example applications may be developed as use cases.
Overrides
The collection and object proxy system was fully replaced by Library Overrides in Blender 3.0. The new system exposed some long-standing complexity problems in Blender, which particularly affected resync performance. The goal of this project is to wrap up the overrides project, assess the future of the override pipeline, address related data management issues, and hand it over to the module.
Physics
The next milestone of the Geometry Nodes project is to support physics dynamics. The first step is to handle interactive physics simulation and to find a solver that can work in real time. This can support both point-based particles and hair nodes, and eventually replace the old simulation system completely.
Texturing
The combination of node-based textures and mask painting will help towards a better non-destructive painting pipeline within Blender. Brush management and performance are related topics that may be visited as well.
Next Steps
During January and February a series of workshops will take place at the Blender HQ to better define these projects, their expected outcomes, audience, and possible solutions. With that in hand we can start to think in terms of teams, milestones and priorities.
When it was decided that Blender would follow the VFX Reference Platform, a couple of assumptions were made:
It would help big studios adopt Blender.
It would make it easier for the industry to contribute to Blender.
During the past two years there was little evidence of either outcome. On the contrary, it was perceived as a limitation, with users missing out on benefits such as the latest Python version in Blender.
We are aware that not following the VFX Reference Platform creates friction for developing add-ons and for integration into studio pipelines. We’d love to hear about such situations, would be happy to collect useful information for studios on blender.org, and welcome their contributions.
With that in mind, the decision was made that Blender will no longer stick to the VFX Reference Platform.
For the Blender community and ecosystem as a whole, it is more important to be able to use the latest Python versions and other libraries. We remain committed to preserving file format compatibility and to avoiding library conflicts for integrations with binary add-ons.
This decision was made considering the arguments presented on the bf-committers list, as well as a meeting with the Blender admins and Foundation chairman Ton Roosendaal. Thanks everyone for pitching in.
As one of the four 2022 Blender strategic targets, a workshop to help refine the Overrides topic was held during the week of the 17th of January at the Blender HQ in Amsterdam. The general idea behind overrides is to enable artists to edit and animate their assets, while still keeping them in sync with the content of the library files.
The current Library Overrides (as in Blender 3.0/3.1) were reviewed, and several possible developments, both in Library Overrides and in new areas, were investigated. Some indirectly related topics were also discussed.
This blog post is a summary of the topics investigated during the workshop week. Some are very well defined and can be implemented immediately, others will require further design work, and only a subset of those tasks and ideas will be addressed this year.
In the original library file the data-blocks are local and can be edited directly.
The different datablocks are related by dependencies. In this example the collection depends on an object that it contains, which depends on a mesh.
In the set file the data-blocks are linked from the library file.
Linked data-blocks cannot simply be edited, and are bound to each other by their dependencies.
To change the data of the object a user override is created (e.g. to move the object around). To be able to create this override, however, the collection that depends on this object needs to be overridden as well.
As this is only necessary to access the object inside, this is applied as a system override. The mesh data-block is not overridden in this case, as it does not depend on any overridden data.
The overridden data from the set file is then linked into the shot file. In this file none of the data-blocks are overridden, as they are directly linked. However, the object that has been changed with an override in the set file does keep the changes it received there.
In the shot file the data is then overridden again, this time changing the mesh data (e.g. to activate a shape key). To be able to access the mesh data, system overrides are created for the object and the collection, and a user override is created for the mesh.
The changes that were made in the set file are still intact and the changes that are made locally are added on top.
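For reference, the same workflow can be scripted from Python today. The sketch below is only an outline, with hypothetical file paths and data-block names:

```python
# Rough outline of linking an asset and creating a library override from Python.
# File paths and data-block names are hypothetical.
import bpy

# Link the character's root collection from the library file.
bpy.ops.wm.link(
    filepath="//library.blend/Collection/Character",
    directory="//library.blend/Collection/",
    filename="Character",
)

# The linked collection appears as an instance empty and becomes the active object.
# Creating the override turns it into a hierarchy: user overrides on the pieces
# the artist edits, system overrides on the data-blocks needed to reach them.
bpy.ops.object.make_override_library()
```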
Performance of Library Overrides is generally good, except when the linked library data is modified. This often requires a resync of the overrides using the modified data, and this process can take a lot of time in the current Blender (3.0), depending on the complexity of the file.
This is a matter of optimizations and implementing known pending TODOs in the current codebase. It is already being worked on: Blender 3.1 will ship with some improvements in that regard, and much more speed-up is expected in the coming weeks.
Blender 3.0: 07:20
Blender 3.1 Beta: 01:40
Blender 3.2 Alpha: 00:11
Resync times (minutes:seconds) on a production file from Sprite Fright by Blender Studio.
Library overrides are not only about single, isolated data-blocks. In almost all use cases, a group of data-blocks need to be overridden together.
For example, in a character, there will be overrides of its root collection and sub-collections, the rig object(s), and most of the geometry objects. Those groups are called “override hierarchies”, as there is typically a single “root” (the main collection of a character for example), and a tree of dependencies.
Keeping this hierarchy information in a clean state is crucial, especially for the resync process required when the linked data changes. The current Blender (3.0) is not doing a great job at that. Furthermore, the current implementation does not properly support relationships between different hierarchies (like parenting a library override of a character to another one). There are several ways to improve this situation.
There are many minor tweaks to be done in existing operators to reduce the likelihood of wrong operations that would break override hierarchies.
For example, in the Outliner, it should not be possible to easily delete the root collection of a library override, while deleting its whole hierarchy should be allowed.
“System” library overrides are data-blocks (and their relationships) that are created purely for technical reasons under the hood, and are not supposed to be editable by the user. A good example of this is the tree of collections in the override of a character: they are required for the system to work, but they are not needed from the user’s perspective.
By preventing modifications of those system overrides (by default), and not showing them so prominently in the user interface (for example with a filter toggled off by default), both robustness and usability of the system will be increased.
The Outliner view of Library Overrides is currently very limited. It needs improvements on both the visualization and the management tools aspect.
Besides reducing the visibility of “system overrides” as discussed in the previous section, the Override view of the Outliner should be divided into two modes: one dedicated to the visualization and handling of override hierarchies, and the other a flat view listing all overrides with their overridden properties. These properties should be editable where possible, making this the one place to see and edit all overridden properties at once.
USD (Universal Scene Description) was not discussed during the workshop, but it is definitely an important topic to keep in mind. Last year, in February, Jeroen Bakker did a short experiment on mapping USD and Blender overrides, which was fairly successful. This can be further investigated in collaboration with the team implementing USD support in Blender.
An alternative way of using Library Overrides is by letting the user specify what can be overridden in the source library.
The main goal of the restrictive workflow is to encourage collaboration between the artists producing an asset, and the artists using this asset.
How to achieve this goal is still under discussion. One of the main challenges of restrictive overrides is to provide an easy and comprehensive user interface and experience.
While being able to edit materials, shading and lighting of linked data is an important feature, it appears that most of the use cases could also be covered using Dynamic Overrides. The main case that requires Library Overrides is the re-assignment of materials.
Furthermore, supporting this in Library Overrides requires a way to select and expose the key parameters of a material, since allowing the editing of all nodes’ parameters in an override of a node tree is not a realistic scenario due to its complexity.
Being able to tweak the shape of a character is an important feature for animators.
The natural solution would be to support adding new Shape Keys in Library Overrides. However, Shape Keys currently have several issues in Blender, on both the usability and architectural levels, which would need to be addressed before adding yet another layer of complexity.
Alternative solutions also need to be investigated, like using Geometry Nodes or sculpting tools.
Adding new data layers to overrides of meshes would open the doors for flexible and powerful pipelines. For example, an asset creation pipeline could be split into different steps, each using overrides from the previous step while adding new extra data layers.
This example could go like so:
A modeler creates the geometry.
A rigger adds new vertex groups.
A texture artist adds new UV maps and Vertex Color layers.
Supporting this feature will require quite some technical and design work.
While generally based on the same idea, Dynamic Overrides are a fairly different approach from Library Overrides. Here are the key differences:
Library Overrides work only on linked data; Dynamic Overrides work on any data, linked or local.
Library Overrides allow modifying ID relationships; Dynamic Overrides do not change ID relationships.
Library Overrides allow adding complex entities like Modifiers or Constraints; Dynamic Overrides can only modify simple ‘value’ properties.
Library Overrides are handled on file save/load; Dynamic Overrides are handled as part of evaluation.
Library Overrides are stored in the ID they affect; Dynamic Overrides are stored in a different ID.
Library Overrides are directly linkable; Dynamic Overrides are linkable only if the ID owning them is linked.
Conceptually, Dynamic Overrides are also quite similar to animation data, the main difference being that they are stored in a different data-block than the one they affect.
Being evaluated as part of the depsgraph, after the handling of animation, also implies that Dynamic Overrides:
Can be animated, but…
Will supersede any animations or drivers defined on the properties they affect.
Dynamic Overrides will be stored in Scene data-blocks, on two levels:
A global level, applying the overrides to all view layers in the scene.
A per-view layer level, replacing the global one when defined.
A Dynamic Override contains a reference to the data-block it affects, an RNA path leading to the affected property in that data-block, and the value of the override.
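Conceptually (this is only an illustrative sketch, not the planned implementation), such a record could look like this, with an apply step that runs after animation evaluation:

```python
# Conceptual sketch of a dynamic override record; not the planned implementation.
from dataclasses import dataclass
from typing import Any

@dataclass
class DynamicOverride:
    id_name: str      # the data-block it affects, e.g. "OBCharacter"
    rna_path: str     # path to the affected property, e.g. "scale"
    value: Any        # the overriding value

def apply_overrides(datablocks, overrides):
    """Applied after animation: the override wins over keyframes and drivers."""
    for ov in overrides:
        block = datablocks[ov.id_name]
        # A real implementation would resolve the full RNA path; for the sake of
        # the illustration only plain attribute names are handled here.
        setattr(block, ov.rna_path, ov.value)

class FakeObject:
    scale = 1.0

datablocks = {"OBCharacter": FakeObject()}
apply_overrides(datablocks, [DynamicOverride("OBCharacter", "scale", 2.0)])
print(datablocks["OBCharacter"].scale)   # 2.0, regardless of animation on 'scale'
```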
Finding out how critical it is to be able to link or export/import those dynamic overrides, and how this could be done, needs to be further investigated.
Another idea to explore is how much Library Overrides and Dynamic Overrides could be presented to the user as a single override interface, at least at a high level.
While the Outliner provides a way to visualize and manage data in .blend files, its tree nature has some limitations. Several years ago, in the 2.3x/2.4x series, Blender used to have a graph representation of its data-blocks and their relationships, called the Oops (Object Oriented Programming Schema).
Oops editor in Blender 2.3/2.4
The need for this type of visualization is still there, although in very complex files a simple Oops (or DOT graph) quickly becomes confusing and hardly useful.
An idea could be to implement a dedicated view/tree type in the Node Editor, where each node would be a data-block. The sockets and edges would then be their relationships.
To keep a better control over visualization, node groups could be used to represent some level of hierarchy (e.g. top node group for a scene, sub-node groups for collections, etc.). This could help both keep the graph cleaner, and help identify unexpected connections between data-blocks from unrelated groups.
Such a visualization/management tool would help to manage Library Overrides.
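As a rough illustration of the information such an editor would expose, data-block relationships are already accessible from Python today; the sketch below dumps them as a DOT graph, as a stand-in for the proposed node-based view:

```python
# Dump data-block relationships as a DOT graph, a stand-in for the proposed
# node-based visualization; uses only the existing Python API.
import bpy

def datablock_graph_dot():
    lines = ["digraph blendfile {"]
    # bpy.data.user_map() maps each data-block to the set of data-blocks using it.
    for used, users in bpy.data.user_map().items():
        for user in users:
            lines.append('  "%s" -> "%s";' % (user.name_full, used.name_full))
    lines.append("}")
    return "\n".join(lines)

print(datablock_graph_dot())
```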
File loading speed has also become a big issue with Library Overrides: even without the resync process, loading a full production file can take several tens of seconds.
Making this process non-blocking, by showing whatever data is available as fast as possible, would improve the feeling of responsiveness and reduce user frustration.
This is not a trivial task; it involves several technical challenges touching many areas of the codebase.
Design and development tasks are coordinated in the Overrides project on developer.blender.org.
Performance and reliability topics have already been investigated at the end of last year, and are being worked on. Great improvements on that front are expected in Blender 3.1 (to be released early March), and even more in Blender 3.2 (June).
In February, further planning will be done with the development team as well as some design work on the remaining targets. The usability topic could be the target of a first development sprint, including tools and UI improvements. Later this year a second sprint could cover the initial dynamic overrides implementation.
Blender’s sculpt tools made a lot of progress in various development branches in 2021. The goal is to begin merging these improvements into the main Blender branch over the next several releases.
For a while now Blender has had an experimental feature to paint vertex colors in Sculpt mode. This has many advantages, like being able to use masks, Face Sets, filters and new brush settings, and of course the same smooth performance as Sculpt mode (up to dozens of millions of faces). There have been some final improvements to release the feature properly.
A performance comparison between the old vertex paint mode and vertex painting in sculpt mode (1.5 million faces)
Initially, it used its own special type of vertex colors that wasn’t compatible with the previous system. These two have now been combined into a single list of “Color Attributes”. These attributes can be created on vertices and face corners, and can be floating-point numbers or bytes.
The vertex colors panel has been replaced with color attributes.
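For scripting, a minimal sketch of creating and filling such an attribute could look like this, assuming the color attribute API that ships alongside the feature:

```python
# Minimal sketch of creating and filling a color attribute from Python,
# assuming the color attribute API that ships alongside this feature.
import bpy

mesh = bpy.context.object.data   # assumes the active object is a mesh

# Per-vertex, full float precision; 'BYTE_COLOR' and the 'CORNER' (face corner)
# domain are the other supported options.
attr = mesh.color_attributes.new(name="Dirt", type='FLOAT_COLOR', domain='POINT')

for item in attr.data:
    item.color = (0.8, 0.2, 0.1, 1.0)   # RGBA per vertex
```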
In addition to regular development, a new feature has been added: cavity masking. Cavity masking weights the paint stroke by how concave (or convex) the geometry is. This was used to make the white streaks along the edges in the flower example below.
The white streaks at the edges were made using cavity masking.
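One common way to estimate such a cavity value, shown below purely as an illustration of the idea rather than the actual brush code, is to compare the vertex normal with the direction towards the average of its neighbors: in concave areas the neighbors sit above the vertex along its normal, in convex areas below.

```python
# Illustration of a simple cavity estimate; not the actual sculpt brush code.
# Positive values mean the vertex sits in a concavity, negative on a convex ridge;
# the paint stroke is then weighted by this value.
def cavity(vertex, normal, neighbors):
    ax = sum(p[0] for p in neighbors) / len(neighbors)
    ay = sum(p[1] for p in neighbors) / len(neighbors)
    az = sum(p[2] for p in neighbors) / len(neighbors)
    dx, dy, dz = ax - vertex[0], ay - vertex[1], az - vertex[2]
    return dx * normal[0] + dy * normal[1] + dz * normal[2]

# A vertex at the bottom of a V-shaped valley, normal pointing up.
print(cavity((0, 0, 0), (0, 0, 1), [(-1, 0, 1), (1, 0, 1)]))   # > 0: concave
```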
Edge boundaries will be handled for Dyntopo and all of the smooth/relax brushes. All kinds of boundaries are kept intact: face set boundaries, topological boundaries, seam edges, sharp edges, etc.
Boundaries such as Face Sets are kept intact when smoothing them.
Topology Rake Improvements
When using Dynamic Topology, the triangulated mesh will follow the surface information much more accurately. In addition to making triangles flow around any boundaries (open boundaries, Face Sets, seams), Topology Rake also has a new “curvature” mode. If no edges are nearby to guide the triangle flow, Topology Rake can use the principal curvature directions of the mesh instead.
Using Dyntopo will not just preserve Face Sets, but also orient the topology along the boundaries & surface curvature.
Sculpting while previewing EEVEE will be fully supported, including the fast PBVH drawing. At first this will only be supported for the regular Sculpt mode with Dyntopo (multires already uses PBVH drawing in master; it’s just missing texture drawing).
Sculpting while in EEVEE preview – Demo file by Daniel Bystedt
The next step is to merge the Dyntopo implementation from the sculpt-dev branch into Blender along with the various cleanups and refactorings that were done to the core Sculpt API.
This new version of Dyntopo is much faster, supports custom attribute interpolation (Vertex Colors, UVs, etc.) and just-in-time triangulation (it no longer triangulates the entire mesh), and preserves all types of boundaries: Face Set boundaries, UV island boundaries, seam/sharp edges, topological boundaries, and more.
Sculpting with Dyntopo while preserving Face Sets, UVs, Vertex Colors, and other mesh attributes.
There are many other improvements and features in progress. Multires sculpting and brush management are key development goals.
The painting improvements made for Sculpt mode vertex painting will also be followed by improvements to the existing Texture Paint mode, and by expanding the Vertex Paint mode into a more general “Attribute Paint” mode.
Texturing is one of the strategic targets for 2022. The first part of the project involves upgrading 3D texture painting performance, quality and tools, and is planned to be worked on in the coming months.
The second part of the project is a new procedural and layered system to create textures, aimed at PBR shading. That part of the project will start later this year, and we are sharing the initial design now for feedback from the community.
The design revolves around a redesigned Texture datablock. This datablock contains texture nodes and a list of channels it outputs.
A typical set of channels for PBR shading are base color, roughness, metallic and normal map. The system is not limited to that, and arbitrary outputs are possible for different types of BSDFs or use in other contexts like brushes and geometry nodes.
The texture properties for a texture datablock show a texture layer stack. Procedural texture layers can be dropped in from the Asset Browser, and new image or attribute layers can be created to be hand painted.
Panels in the texture properties editor: layer stack, properties of the selected layer and modifier, and list of texture channels.
Layers work similarly to typical 2D image editing applications. Blend modes, masks, reordering, hiding, merging, and modifiers would be supported. Selecting an image texture or color attribute would enable painting on it in the 3D viewport.
A difference is that each layer consists of all the channels defined for the texture (or a subset). Blending, masks and modifiers simultaneously affect all channels in a layer.
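As a sketch of that idea (names and blend behavior here are illustrative, not the final design), blending a layer means applying the same blend mode and mask to every channel the layer carries:

```python
# Illustrative sketch of blending one texture layer over a base: the same blend
# mode and mask apply to every channel the layer defines. Not the final design.

def blend_layer(base, layer, mask=1.0, mode="mix"):
    result = dict(base)
    for channel, value in layer.items():          # e.g. base_color, roughness, metallic
        if channel not in base:
            result[channel] = value
            continue
        if mode == "mix":
            blended = value
        elif mode == "add":
            blended = base[channel] + value
        else:
            raise ValueError(mode)
        # The single mask weights all channels of the layer at once.
        result[channel] = base[channel] * (1.0 - mask) + blended * mask
    return result

base = {"base_color": 0.8, "roughness": 0.5}
rust = {"base_color": 0.3, "roughness": 0.9, "metallic": 1.0}
print(blend_layer(base, rust, mask=0.25))
```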
The texture layer stack corresponds to a node graph, and the node editor provides an alternative view to edit the same texture. This can be used to create more complex node setups that can’t be expressed in a stack.
Texture nodes corresponding to the layers in the mockup above.
The new texture nodes are mostly shared with shader nodes. New additions are layering nodes, and nodes for effects that are not practical at render time.
The set of available nodes would be:
Common nodes like Math, Mix, Image Texture, Noise Texture
Shader nodes like Geometry, Texture Coordinate, Attribute
Texture specific nodes like Blur, Filter
Layer node to bundle multiple channels together into a layer
Layer Stack node to combine layers
Texture node to link in an existing texture datablock asset
There is a new layer socket type that combines multiple channels. Nodes like Mix and Blur work on multiple socket types, including layer sockets where they perform the same operation on all channels.
Using texture channels in a material is done by adding a Texture Channels node that outputs all the channels of a texture, and linking those channels to the corresponding Principled BSDF inputs. Most of the time such node setups are automatically set up as part of a default material.
Shader nodes linking texture channels to a BSDF.
The Texture Channels node has the following settings:
Node inputs, created in texture nodes with a Group Input node (similar to geometry nodes). This makes textures customizable with parameters, attribute names or image textures.
Option to evaluate the texture channels Procedural or Baked.
If Baked, a link to one Image datablock that channels are baked to.
The Image datablock would be extended to contain a set of texture channels. With multilayer OpenEXR files this comes naturally: all texture channels are saved in a single file, similar to render passes. For other file formats there is a different filename for each texture channel, or multiple texture channels are packed together into the RGBA channels of fewer files, as is common for games.
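For example, packing scalar channels into the color planes of one image, a common convention in game pipelines, could look like this rough sketch (the channel assignment here is just a convention, not part of the design):

```python
# Rough sketch of packing scalar texture channels into the RGBA planes of a single
# image, as is common in game pipelines. The channel assignment is only a convention.
import numpy as np

height, width = 1024, 1024
ambient_occlusion = np.ones((height, width), dtype=np.float32)
roughness = np.full((height, width), 0.5, dtype=np.float32)
metallic = np.zeros((height, width), dtype=np.float32)

# One RGBA image: AO in R, roughness in G, metallic in B, alpha left opaque.
packed = np.stack(
    [ambient_occlusion, roughness, metallic, np.ones((height, width), dtype=np.float32)],
    axis=-1,
)
print(packed.shape)   # (1024, 1024, 4)
```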
When showing an Image datablock in the image editor, a menu lets you flip through the texture channels, exactly like render layers and passes.
Baking multiple meshes or materials into a single image becomes straightforward. Link the same Image datablock to the Texture Channels node in all materials, and Blender automatically bakes everything together, assuming the UV maps are non-overlapping.
Texture datablocks are available in the asset browser for easy dropping into the texture layer stack. Blender would ship with a library of procedural textures that can be used for quickly creating materials.
One complexity in the design is that texture datablocks are aimed at multiple use cases, including different types of materials, sculpt or texture brushes, geometry nodes, or compositing. For this we need some method of filtering textures for the current task.
In the context of a PBR workflow, materials and textures are almost the same thing. In such cases having both a list of material and texture assets seems redundant. However in Blender we need to accommodate more workflows, and so we likely can’t hide this distinction from users.
This is an ambitious design that affects many areas of Blender. We welcome feedback and ideas to improve it. There is a topic for design discussion on devtalk.
A few questions we are thinking about:
Is making a single texture datablock for many uses a good idea? Or does it make the workflow too fuzzy? Is there a better alternative? If not, how would filtering relevant texture assets work exactly?
Are there better ways to integrate textures into materials than through a node? Is it worth having a type of material that uses a texture datablock directly and has no nodes at all, and potentially even embeds the texture nodes so there is no separate datablock?
With this system, users can do the same thing in both texture and shader nodes. How does a user decide which to pick, and is there anything that can be done to guide this?
What is a good baking workflow in a scene with potentially many objects with many texture layers? In a dedicated texturing application there is a clear export step, but how can we make a good workflow in Blender where users must know to bake textures before moving on to the next object?
Some textures can remain procedural while others must be baked to work at all. How can we communicate this well to users? Is it a matter of having a Procedural / Baked switch on textures that can be manually controlled, or is there more to it?
How do we define what a modifier is in the layer stack? Conceptually this can just be any texture node that has a layer input and output, but how do we find a set of such nodes and node groups that work well in a higher level UI? Possibly some asset metadata on node groups?
Can we improve on the name of the Texture datablock or the new planned nodes?