Here is my Kaboobly Doo generator: kabooblydoo.appspot.com
Run properly, it should be able to generate grammatical sentences without any coherent meaning.
To demonstrate its power, I showcase some great sentences it generated:
Harsh words are like programmed instructions, given to us when we are young and beautiful.
Preconceived notions are like sharp knives, once you throw them at somebody they can not teach you computer science or hardly anything else.
Every man should be far better neighbors and that an indissoluble law might be more conspicuous to all that took place, said, "On my word, you are doing!"
You are invited to try it out, and yes, Fork me on GitHub!
If you are interested in how this works, have a look at this and read on.
If you have seen my experiment with the iOS predictive keyboard, it should have convinced you that this method is indeed a good way to generate convincing but Kaboobly Doo sentences.
Such an algorithm could be used to generate pronounceable strings, mimic spam to steganographically conceal data, or produce papers for conferences whose standards are, you suspect, too low.
For those who do not know, a Markov chain is a set of states equipped with transition probabilities between them.
From the given data, we generate a list of prefixes (which are our states) and, for each prefix, a list of words that followed it. The more often a word follows a prefix in the real data, the higher the chance of the transition to the state that includes that word.
Suppose we are given the following data:
Are modern calculus books Kaboobly Doo?
Feeding the data into the chain generator with prefix length 2 gives the following table:
Prefix             Suffix
(start)            Are
Are                modern
Are modern         calculus
modern calculus    books
calculus books     Kaboobly
books Kaboobly     Doo?
Kaboobly Doo?      (end)
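In case you want to see this in code, here is a minimal sketch of the construction in Python. The names below (build_chain and friends) are my own illustration, not the generator's actual source:

    import collections

    def build_chain(text, prefix_len=2):
        """Map each prefix (a tuple of words) to the list of
        words that followed it in the training text."""
        chain = collections.defaultdict(list)
        prefix = ()  # start with an empty prefix, as in the first row above
        for word in text.split():
            chain[prefix].append(word)
            # grow the prefix, keeping at most prefix_len words
            prefix = (prefix + (word,))[-prefix_len:]
        return chain

    chain = build_chain("Are modern calculus books Kaboobly Doo?")
    # chain[()]                     -> ['Are']
    # chain[('Are',)]               -> ['modern']
    # chain[('Are', 'modern')]      -> ['calculus']
    # chain[('modern', 'calculus')] -> ['books']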
Over here, each state has only one possible state it could transit to. Had we much larger data, however, there could be more than one possible suffix for the same prefix, in which case we would transit to one of them at random, weighted by their rates of occurrence in the original text.
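A cheap way to get this weighting is to store one entry in the suffix list per occurrence, so that a uniformly random pick is automatically biased towards frequent suffixes. A hypothetical example:

    import random

    # Suppose 'the' was followed by 'cat' three times and 'dog' once;
    # the suffix list records every occurrence separately.
    suffixes = ['cat', 'cat', 'cat', 'dog']

    # A uniform choice then picks 'cat' with probability 3/4.
    next_word = random.choice(suffixes)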
Once we have generated the chain, we randomly choose an initial state, print its words, and then transit to the next state by deleting the first word of the current state and appending a randomly chosen suffix as explained above. We do this till we hit the word limit or run out of states.
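Put together, the generation loop might look like this sketch (build_chain is the hypothetical helper from above, not the generator's actual code):

    import random

    def generate(chain, prefix_len=2, max_words=50):
        """Walk the chain from a random initial state,
        emitting one word per transition."""
        prefix = random.choice(list(chain.keys()))
        output = list(prefix)
        while len(output) < max_words:
            suffixes = chain.get(prefix)
            if not suffixes:                 # ran out of states
                break
            word = random.choice(suffixes)   # weighted by occurrence
            output.append(word)
            # shift the state: drop the oldest word, append the new one
            prefix = (prefix + (word,))[-prefix_len:]
        return ' '.join(output)

    print(generate(build_chain("Are modern calculus books Kaboobly Doo?")))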