On Randomness and Hashing Functions | Omar Bohsali

2020-09-04 20:13:29 +0000

Since reading Kurzweil in middle school, I’ve been deeply curious about randomness, entropy, and chaos theory. I spent a few years hacking on exactly this for Intel ISEF projects, and for a little while I trolled the sci.crypt.random-numbers newsgroup with questions about generating random numbers from thermal noise (an idea that, I think, also came from a Kurzweil book).

I don’t believe in actual randomness. To take it a step further, I think there is actually a closed-form solution for reversing a hashing function. Not just a way to predictably generate collisions, but an actual closed-form solution you can use to reverse a hash.

While I haven’t gone too deep into the subject academically, I still have strong opinions on the matter. I’m not sure why, but it just makes sense to me that everything is deeply deterministic, and if it’s deterministic, there should be a closed-form solution to go the other way. So maybe this is my case of spreading misinformation.

How do we prove this? I don’t think we have the mathematics to do it yet. But maybe we will, sometime soon.

How could you test this?

Well, given a lot of computing power, you could try a reinforcement learning experiment. This is an interesting problem because you don’t have to collect any training data - you can easily compute it yourself. Has anybody done research on this? It’s the kind of research that definitely seems dumb from the outside, so you’d either have to be a tenured professor who doesn’t care what other people think.

Or just someone with a basic understanding of Tensorflow and a lot of Azure credits.
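The “compute the training data yourself” part really is trivial. As a rough sketch (my own illustration, not from the post - SHA-256 and 8-byte preimages are arbitrary choices here), you can mint unlimited (digest, preimage) pairs with the standard library alone, and a model would then be trained to map digests back to preimages:

```python
import hashlib
import os

def make_training_pair(n_bytes=8):
    """Generate one (digest, preimage) example.

    The preimage is random; the label (its hash) is free to
    compute - which is the whole point: no dataset collection.
    """
    preimage = os.urandom(n_bytes)
    digest = hashlib.sha256(preimage).digest()
    return digest, preimage

# A model would learn the mapping digest -> preimage.
dataset = [make_training_pair() for _ in range(1000)]

# Sanity check: every pair is internally consistent.
digest, preimage = dataset[0]
assert hashlib.sha256(preimage).digest() == digest
```

Whether any model could actually learn that mapping is exactly the open question - the avalanche property of cryptographic hashes means nearby preimages produce unrelated digests, which is hostile to the smooth gradients most learning methods rely on.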