Highest Rated Comments


serifmasterrace · 659 karma

If you think my top is cute, you cannot execute!

serifmasterrace · 24 karma

Overseas Chinese traveling to Southeast Asia soon. Hoping to leave a good impression!

serifmasterrace · 13 karma

Like Crazy Dave, but with a pot on his head.

serifmasterrace · 4 karma

If all the nodes are linear operations, the function that the tree is modeling can be collapsed into the form wX+b.

Then we’d just be solving least squares with extra steps, right? There’s already a fast analytical solution. Or is there something I’m missing here?
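
For context, a minimal sketch of the closed-form fit this would reduce to (assuming numpy; the data here is made up purely for illustration):

    import numpy as np

    # Illustrative data: 100 samples, 2 features, targets generated from
    # a known linear rule plus a little noise.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = 8 * X[:, 0] + 12 * X[:, 1] + 5 + rng.normal(scale=0.1, size=100)

    # Append a column of ones so the intercept b is fit alongside w.
    X_aug = np.hstack([X, np.ones((100, 1))])

    # Closed-form least squares: one solve, no iterative training.
    theta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
    w, b = theta[:2], theta[2]
    print(w, b)  # roughly [8, 12] and 5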

serifmasterrace · 3 karma

Any combination of linear operators can be collapsed into the form wX+b.

For example, if you have a tree representing (2X[1] + 3X[2]) * 4 + 5, it's no different from wX + b where X = matrix([X[1], X[2]]), w = [8, 12], b = 5.
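
A quick numerical check of that collapse (a sketch, assuming numpy; the random inputs are only there to exercise both forms):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))  # columns play the role of X[1] and X[2]

    # The tree, evaluated node by node: (2*X1 + 3*X2) * 4 + 5
    tree_out = (2 * X[:, 0] + 3 * X[:, 1]) * 4 + 5

    # The collapsed linear form wX + b with w = [8, 12], b = 5
    w, b = np.array([8.0, 12.0]), 5.0
    collapsed_out = X @ w + b

    assert np.allclose(tree_out, collapsed_out)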

max(a,b) is just a constrained linear program.
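
Concretely, max(a, b) is the solution to "minimize t subject to t >= a and t >= b"; here is a sketch of that LP using scipy (the values of a and b are arbitrary):

    from scipy.optimize import linprog

    a, b = 3.0, 7.0

    # One variable, t. Minimize t subject to t >= a and t >= b,
    # rewritten as -t <= -a and -t <= -b for linprog's A_ub @ x <= b_ub form.
    res = linprog(c=[1.0],
                  A_ub=[[-1.0], [-1.0]],
                  b_ub=[-a, -b],
                  bounds=[(None, None)])
    print(res.x[0])  # 7.0, i.e. max(a, b)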

e^x and x^i are nonlinear; those are the kinds of operations represented by activation functions in neural nets.

Your tree is creating some extra linear operations that could be simplified away to greatly improve runtime. Maybe try that, but the solution space being learned won't be any different from that of a neural net.
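
To make that last point concrete (a sketch, assuming numpy): stacking linear layers with no nonlinearity in between collapses to a single wX + b, which is why the activation is what actually changes the solution space.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))

    # Two stacked linear layers with no activation in between.
    W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
    W2, b2 = rng.normal(size=(4, 2)), rng.normal(size=2)
    two_layer = (X @ W1 + b1) @ W2 + b2

    # They collapse to a single linear layer: W = W1 @ W2, b = b1 @ W2 + b2.
    W = W1 @ W2
    b = b1 @ W2 + b2
    one_layer = X @ W + b

    assert np.allclose(two_layer, one_layer)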