
I agree - that also matches the majority of ways I've seen them used in projects. While it would be really nice if artists used them to share common settings/values and enforce consistency across a project, I've never seen that actually happen on a live project with a significant number of people on it.

Some nodes I used to have in my shader graph that SF doesn't support, and that I would add, include:


GradientNoise 1D/2D/3D
WorleyNoise 1D/2D/3D
FBM functions (a rough sketch follows this list)
ParallaxOcclusionMapping (and its various variants)
Flow Mapping
SDF functions
Curve/Gradient nodes
Masking nodes (isolate signals, etc., via simple nodes instead of math)
etc.
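
For the FBM functions, here's a rough sketch of the kind of helper I mean (the names hash21/valueNoise2D/fbm2D are just illustrative, and I'm assuming a cheap hash-based value noise underneath - a proper gradient noise would slot in the same way):

// Cheap sin-based hash; fine for previews, not guaranteed stable across GPUs.
float hash21(float2 p)
{
    return frac(sin(dot(p, float2(127.1, 311.7))) * 43758.5453);
}

// Bilinearly interpolated value noise with a smoothstep fade.
float valueNoise2D(float2 p)
{
    float2 i = floor(p);
    float2 f = frac(p);
    float2 u = f * f * (3.0 - 2.0 * f);
    float a = hash21(i);
    float b = hash21(i + float2(1, 0));
    float c = hash21(i + float2(0, 1));
    float d = hash21(i + float2(1, 1));
    return lerp(lerp(a, b, u.x), lerp(c, d, u.x), u.y);
}

// Fractal Brownian motion: sum octaves of noise at increasing frequency, decreasing amplitude.
float fbm2D(float2 p, int octaves)
{
    float sum = 0.0;
    float amp = 0.5;
    for (int i = 0; i < octaves; i++)
    {
        sum += amp * valueNoise2D(p);
        p *= 2.0;      // lacunarity
        amp *= 0.5;    // gain
    }
    return sum;
}

In graph terms, octaves/lacunarity/gain would just be inputs on the node.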


For me, nodes in nodes and the code node have always been ways to add large functions which are not available in SF by default and are too unwieldy to work with as a unique part of each graph that uses them. Our old shader graph didn't have nested node support (nor were the artists crying for it) - but it was very easy (for coders) to add nodes to, and our artists would just show us a graph when they wanted it encapsulated into a node and we'd do it for them.

I don't use Shader Forge very often anymore, so take this with a grain of salt. Anyway, my opinion is that not allowing nested nodes to nest is probably ok, but annoying, as it means anyone creating a nested node is going to have to know that nested nodes are second-class citizens and need to be handled specially. Multiple output support is nice for things like parallax occlusion mapping, where you can supply another output for shadow calculations and expect the compiler to optimize it out when it's not in use (see the sketch below). Not having properties seems ok, as long as the input properties can still be optimized to static values by the compiler.
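
To illustrate the multiple-output point, here's a minimal sketch of the interface shape I have in mind (the names and the stubbed-out body are mine, not anything SF provides). If nothing in the graph is connected to selfShadow, the compiler is free to strip the shadow work entirely:

// Sketch only - a real POM node would ray-march the height field for both outputs.
void ParallaxOcclusion(float2 uv, float3 viewDirTS, sampler2D heightMap, float heightScale,
                       out float2 offsetUV, out float selfShadow)
{
    float h = tex2D(heightMap, uv).r;
    offsetUV = uv + (viewDirTS.xy / viewDirTS.z) * (h * heightScale);
    selfShadow = 1.0; // placeholder; a real version would march toward the light here
}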


Personally, I'd prefer to approach this from a higher level: nested nodes are just a way to create a function library - in most cases, that library gets shared and used over and over, and doesn't change that often. While you might have some truly unique, project-specific nodes, the number is likely pretty small, so the recompile issue is not huge IMO. Additionally, any system which allows us to add custom nodes would potentially solve the same issue. The code node is currently cumbersome, not easily re-usable, and has many limitations. I don't particularly care if the way to add new nodes is via the node structure or another technique, and while I suspect some of the community would be sad about another solution, for most users it's just a way to get more nodes that encapsulate common tricks. If an API could be exposed that would allow people to add nodes in another way (C#/CG files, etc.) and had fewer restrictions, that would be preferable.


No worries Joachim, it'll all get fixed up eventually. 

Tony: I recently went through the same challenge with the normals. What I ended up doing was modifying the TBN in the vertex shader to account for it. I tried several other techniques first, but this seemed the most elegant and used the fewest instructions in the end (I'm targeting SM2.0 for some of this). Here's the code, hope it helps:

// Sides that get sampled upside-down (-X, -Y, -Z) need to have their normal map's Y flipped.
// Positive sides end up with +1, while negatives end up with -1.

float lowest = min(min(v.normal.x, v.normal.y), v.normal.z);

// For top and bottom, the tangent points to the right. In all other cases it points straight down.
float3 tangent = float3(abs(v.normal.y), -max(abs(v.normal.x), abs(v.normal.z)), 0.0);
v.tangent = float4(normalize(tangent), floor(lowest) * 2.0 + 1.0);
// Normal to world space (multiplying on the left by _World2Object is the inverse-transpose trick, so it stays correct under non-uniform scale).
o.normalDir = mul(half4(v.normal, 0), _World2Object).xyz;
// Tangent to world space.
o.tangentDir = normalize(mul(_Object2World, half4(v.tangent.xyz, 0.0)).xyz);
// Binormal from the cross product, flipped by the handedness stored in tangent.w.
o.binormalDir = normalize(cross(o.normalDir, o.tangentDir) * v.tangent.w);
That won't handle rotations. You'd need to use a matrix to transform it correctly, but the whole idea of using anything for this is silly, since v.vertex simply needs to be passed as a texcoord to the pixel shader and no actual calculations are needed. Doing an extra matrix mul for every pixel is extremely wasteful for something that's already easily available in the vertex shader. I've pretty much gone back to writing shaders by hand because of these types of issues. 
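
For what it's worth, a minimal sketch of the pass-through I mean (pre-Unity-5 style to match the snippet above; the struct and names are mine, and it assumes UnityCG.cginc is included for appdata_full):

struct v2f
{
    float4 pos      : SV_POSITION;
    float2 uv       : TEXCOORD0;
    float3 localPos : TEXCOORD1; // original model-space position, no per-pixel matrix math needed
};

v2f vert(appdata_full v)
{
    v2f o;
    o.localPos = v.vertex.xyz;               // grab it before any vertex offset is applied
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.uv = v.texcoord.xy;
    return o;
}

In the fragment shader, i.localPos is then just there for free.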
So, I recently spent a lot of time porting a Shader Forge shader to code and making it run on SM2 platforms. It was 135 pixel instructions, so getting that down to 64 was pretty tricky. Anyway, I learned a bit about the shader compiler and its error reporting in that process. Often, when a shader fails on a platform, it will fail silently and just draw black. This usually happens when your shader passes the compiler's checks (number of interpolators, number of instructions, etc.) but for some reason still has an issue on the device or in the emulator. It's pretty annoying, because you get no information about it and can only figure out what is happening by commenting out sections of your code until you come to the offending piece.

In my case, simply sampling one more texture seemed to put me over the line (but again, no compiler warnings/errors). What was odd about this was that I was running in SM3, and only had 5 texture samplers other than the offending one. However, swapping that sample out for a simple color fixed the problem.

Oh, and annoyingly, the fallback shader is not always invoked in this case. 

Anyway, this isn't really a shader forge issue - though I suspect it will come up again in the context of SF. Given the complexity of Unity's shader compiler (cross compiling/translating, many platforms, etc), it's also not surprising that there are unfortunate dead ends like this.

Ok, I've converted over to hand written shaders for this. I actually needed to send over some additional data in a TEXCOORD from the vertex shader, so simply having the original model-space vertex position wouldn't have been enough to fully solve the issues I was having. That said, I still want to be able to access this one day..


 
Hey Joachim,
  Is this something you can likely put in over the next few weeks? If not, I'm going to have to turn a few massive shaders I'm working on into regular code - which I can do - but it would be a drag. Thanks!
No, it's assuming LDR. Back when I was writing lots of shader nodes, I did a pretty comprehensive comparison of RGB->HSV->RGB conversions, and this was the fastest I could find. There's an alternate version on ShaderToy from Iñigo Quilez that's worth considering as well; the main difference is that it produces a more continuous hue spectrum than the standard math does.

https://www.shadertoy.com/view/MsS3Wc
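
For reference, the HSV->RGB half of that approach ported to HLSL from memory (my adaptation, not a verbatim copy of the ShaderToy); the cubic smoothing line is what produces the more continuous hue ramp:

float3 HSVtoRGB_Smooth(float3 hsv)
{
    // Standard hue-to-RGB ramp, built from clamped triangle waves.
    float3 rgb = saturate(abs(fmod(hsv.x * 6.0 + float3(0.0, 4.0, 2.0), 6.0) - 3.0) - 1.0);
    rgb = rgb * rgb * (3.0 - 2.0 * rgb); // cubic smoothing; drop this line for the standard hard-edged ramp
    return hsv.z * lerp(float3(1.0, 1.0, 1.0), rgb, hsv.y);
}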
 
Nope - when I do that it still acts as if it's in world space.

Even if that did work, it wouldn't be sufficient, since it would give you the world-space position of the vertex after vertex offset has been applied. Having the original, local vertex position lets you work in the original model space, free from scale, position, rotation, and vertex offsets, without trying to undo all of those transformations in the shader.