Some nodes I used to have in my shader graph that SF doesn't support, that I would add, include:
GradientNoise 1D/2D/3D
WorleyNoise 1D/2D/3D
FBM functions
ParallaxOcclusionMapping (and its various variants)
Flow Mapping
SDF functions
Curve/Gradient nodes
Masking nodes (isolate signals, etc, via simple nodes instead of math)
etc.
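To make the FBM entry concrete, this is roughly the kind of function I'd want as a built-in node - a minimal Cg/HLSL sketch of value-noise FBM (the sine-based hash is a throwaway, fine for previews but not production; all names here are mine):

```
float hash21(float2 p)
{
    // Cheap sine hash - replace with a proper hash or gradient noise for real use.
    return frac(sin(dot(p, float2(127.1, 311.7))) * 43758.5453);
}

float valueNoise(float2 p)
{
    float2 i = floor(p);
    float2 f = frac(p);
    f = f * f * (3.0 - 2.0 * f); // smoothstep fade between lattice points
    float a = hash21(i);
    float b = hash21(i + float2(1, 0));
    float c = hash21(i + float2(0, 1));
    float d = hash21(i + float2(1, 1));
    return lerp(lerp(a, b, f.x), lerp(c, d, f.x), f.y);
}

float fbm(float2 p, int octaves)
{
    float sum = 0.0;
    float amp = 0.5;
    for (int i = 0; i < octaves; i++)
    {
        sum += amp * valueNoise(p);
        p   *= 2.0; // lacunarity
        amp *= 0.5; // gain
    }
    return sum;
}
```

The point being: nobody should have to rebuild that subgraph out of math nodes in every graph that wants noise.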
For me, nodes-in-nodes and the code node have always been ways to add large functions which are not available in SF by default and are too unwieldy to rebuild as a unique part of each graph that uses them. Our old shader graph didn't have nested node support (nor were the artists crying for it) - but it was very easy (for coders) to add nodes to, and our artists would just show us a graph when they wanted it encapsulated into a node, and we'd do it for them.
I don't use Shader Forge very often anymore, so take this with a grain of salt. Anyway, my opinion is that not allowing nested nodes to nest is likely OK, but annoying, as it means anyone creating a nested node has to know that nested nodes are second-class citizens and need to be handled specially. Multiple output support is nice for things like parallax occlusion mapping, where you can supply another output for shadow calculations and expect that the compiler will optimize it out when not in use. Not having properties seems OK, as long as the input properties can still be optimized to static values by the compiler.
Personally, I'd prefer to approach this from a higher level; nested nodes are just a way to create a function library - in most cases, that library gets shared and used over and over, and doesn't change that often. While you might have some truly unique, project-specific nodes, the number is likely pretty small, so the recompile issue is not huge IMO. Additionally, any system which allows us to add custom nodes would potentially solve the same issue. The code node is currently cumbersome, not easily re-usable, and has many limitations. I don't personally care if the way to add new nodes is via the node structure or another technique, and while I suspect some of the community would be sad about another solution, for most users it's just a way to get more nodes that encapsulate common tricks. If an API could be exposed that would allow people to add nodes in another way (C#/CG files, etc.) with fewer restrictions, that would be preferable.
Tony: I recently went through the same challenge with the normals. What I ended up doing was modifying the TBN in the vertex shader to account for it. I tried several other techniques first, but this seemed the most elegant and used the fewest instructions in the end (targeting SM2.0 for some of this). Here's the code, hope it helps:
// Sides that get sampled upside-down (-X, -Y, -Z) need to have their normal map's Y flipped.
// Positive sides end up with +1, while negatives end up with -1.
float lowest = min(min(v.normal.x, v.normal.y), v.normal.z);
// For top and bottom, the tangent points to the right. In all other cases it points straight down.
float3 tangent = float3(abs(v.normal.y), -max(abs(v.normal.x), abs(v.normal.z)), 0.0);
v.tangent = float4(normalize(tangent), floor(lowest) * 2.0 + 1.0);
// Build the world-space TBN from the adjusted tangent.
// (Old Unity built-ins: _Object2World / _World2Object are the model matrix and its inverse.)
o.normalDir = mul(half4(v.normal, 0), _World2Object).xyz; // multiply on the left = inverse-transpose
o.tangentDir = normalize( mul( _Object2World, half4( v.tangent.xyz, 0.0 ) ).xyz );
o.binormalDir = normalize(cross(o.normalDir, o.tangentDir) * v.tangent.w); // w carries the Y flip
In my case, simply sampling one more texture seemed to put me over the line (and again, with no compiler warnings/errors). What was odd about this was that I was running in SM3, and only had 5 texture samplers besides the offending one. However, swapping that sampler out for a simple color fixed the problem.
Oh, and annoyingly, the fallback shader is not always invoked in this case.
Anyway, this isn't really a shader forge issue - though I suspect it will come up again in the context of SF. Given the complexity of Unity's shader compiler (cross compiling/translating, many platforms, etc), it's also not surprising that there are unfortunate dead ends like this.
Is this something you can likely put in over the next few weeks? If not, I'm going to have to turn a few massive shaders I'm working on into regular code - which I can do - but it would be a drag. Thanks!
https://www.shadertoy.com/view/MsS3Wc
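As far as I recall, that ShaderToy is iq's smooth HSV-to-RGB conversion; a rough Cg/HLSL port would look something like this (untested translation - for hue in [0,1] the fmod argument stays non-negative, so GLSL mod and HLSL fmod agree here):

```
float3 hsv2rgb(float3 c) // c = (hue, saturation, value), all in [0,1]
{
    // Piecewise-linear hue ramp...
    float3 rgb = clamp(abs(fmod(c.x * 6.0 + float3(0.0, 4.0, 2.0), 6.0) - 3.0) - 1.0, 0.0, 1.0);
    // ...smoothed with a cubic (smoothstep) curve to avoid hard edges between hues.
    rgb = rgb * rgb * (3.0 - 2.0 * rgb);
    return c.z * lerp(float3(1, 1, 1), rgb, c.y);
}
```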
Even if that did work, it wouldn't be sufficient, since it would be using the world-space position of the vertex after vertex offset. Having the original, local vertex position lets you work in the original model space, free from scale, position, rotation, and vertex offsets, without trying to undo all of those transformations in the shader.
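To illustrate what I mean - in hand-written Cg you'd just capture the local position before applying any offset and pass it down an interpolator (a sketch; _OffsetAmount and the struct names are mine):

```
struct v2f
{
    float4 pos      : SV_POSITION;
    float3 localPos : TEXCOORD0; // original model-space position, pre-offset
};

float _OffsetAmount; // hypothetical vertex-offset property

v2f vert(appdata_base v)
{
    v2f o;
    o.localPos = v.vertex.xyz;                  // capture BEFORE any offset
    v.vertex.xyz += v.normal * _OffsetAmount;   // offset applied after the capture
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    return o;
}
```

The fragment shader then gets a position that's untouched by scale/rotation/translation and by whatever offset the graph applies.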
I agree - this is also the majority of the ways I've seen them used in projects. While it would be really nice if artists used them to share common settings/values and enforce certain consistencies across a project, I've never seen that actually happen in a live project with a significant number of people on it.