A random oracle is a theoretical model of a perfect cryptographic hash function. It is used in proofs that a cryptographic system or protocol is secure: such a proof shows that, in order to break the protocol, an attacker must either exploit some specific property of the hash function itself or solve some other problem believed to be hard.
When a random oracle is given a query x it does the following:
- If the oracle has been given the query x before, it responds with the same value it gave previously.
- If the oracle has not been given the query x before, it generates a response chosen uniformly at random from its output domain.
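The two rules above can be sketched as a short simulation: a lookup table remembers past queries, and each new query receives a fresh uniformly random string. This is a toy illustration, not a real cryptographic primitive; the class name, output length, and use of `os.urandom` are choices made here for the sketch.

```python
import os


class RandomOracle:
    """Toy random oracle: memoized, uniformly random responses."""

    def __init__(self, out_bytes: int = 32):
        self.out_bytes = out_bytes
        self.table = {}  # remembers every query answered so far

    def query(self, x: bytes) -> bytes:
        # Repeated query: return the same value as before.
        # New query: draw a fresh uniform value from the output domain.
        if x not in self.table:
            self.table[x] = os.urandom(self.out_bytes)
        return self.table[x]
```

Asking the oracle the same question twice yields identical answers, while distinct queries yield independent random values (equal only with negligible probability for a 32-byte output).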
No real hash function can implement a true random oracle. In fact, certain highly artificial protocols have been constructed that are provably secure in the random oracle model, but which are trivially insecure when any real hash function is substituted for the random oracle. Nonetheless, for more natural protocols, a proof of security in the random oracle model gives strong evidence that any attack which does not break the other assumptions of the proof, if any (such as the hardness of integer factorization), must exploit some unknown and undesirable property of the actual hash function used in the protocol.
- Mihir Bellare and Phillip Rogaway, "Random Oracles are Practical: A Paradigm for Designing Efficient Protocols", ACM Conference on Computer and Communications Security, 1993, pp. 62–73.
- Ran Canetti, Oded Goldreich and Shai Halevi, "The Random Oracle Methodology Revisited", STOC 1998, pp. 209–218.