Trading used to take place primarily at physical stock exchanges, with traders shouting prices and quantities into a sea of bodies.
As the trading world - like the rest of the world - has gone digital, the game has changed. While financial “experts” can still provide insights and make key decisions, the new heroes are math, statistics, and computer science.
The massive amount of trading data - and the frequency at which trading occurs - is a major reason why quantitative methods and automated algorithms have become a bedrock of Wall Street.
The New York Stock Exchange (NYSE) sees 2-3 million separate trades on a standard day. Given that trading hours are 9:30am to 4:00pm (23,400 seconds), about how many trades (on average) are occurring per second on the NYSE?
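One way to sanity-check the arithmetic is to divide the daily trade count by the number of trading seconds. A quick sketch, using only the 2-3 million figure and the 23,400-second trading day given above:

```python
# Approximate average trades per second on the NYSE,
# using the figures stated above (2-3 million trades per day).
trading_seconds = 6.5 * 60 * 60  # 9:30am to 4:00pm = 6.5 hours = 23,400 s

low = 2_000_000 / trading_seconds
high = 3_000_000 / trading_seconds
print(f"{low:.0f} to {high:.0f} trades per second")  # roughly 85 to 128
```

So on the order of 100 trades occur every second, on average.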
Stocks and other assets can be modeled with probability distributions. The graph above shows the stock price of Tesla (TSLA) over a one-year period, ending with the stock around $200. Which of the following probability density functions is the most reasonable model for the price of TSLA one year later?
Under “normal” conditions (e.g., not around earnings reports), it is often reasonable to model a stock price as a random variable. With no information about stock X, which of the following is a better model?
A: Stock X is multiplied by \(1.001\) or divided by \(1.001\) each minute with equal probability.
B: Stock X increases by $1 or decreases by $1 each minute with equal probability.
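A minimal simulation sketch can highlight the key difference between the two models. The parameters (a $5 starting price, 2,000 one-minute steps, 200 simulated paths) are illustrative assumptions, not values from the question:

```python
import random

random.seed(0)

def multiplicative_walk(price, steps, factor=1.001):
    # Model A: price is multiplied or divided by `factor` each minute.
    for _ in range(steps):
        price = price * factor if random.random() < 0.5 else price / factor
    return price

def additive_walk(price, steps, step=1.0):
    # Model B: price moves up or down by a fixed dollar amount each minute.
    for _ in range(steps):
        price += step if random.random() < 0.5 else -step
    return price

# A low-priced stock exposes the weakness of the additive model:
# it can produce negative prices, while the multiplicative model cannot.
start = 5.0
finals_a = [multiplicative_walk(start, 2_000) for _ in range(200)]
finals_b = [additive_walk(start, 2_000) for _ in range(200)]
print(min(finals_a) > 0)             # True: multiplicative prices stay positive
print(any(p < 0 for p in finals_b))  # True: additive paths can end below $0
```

The multiplicative model also makes each minute's move proportional to the current price, which matches how percentage returns behave in practice.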
A bank in a small town is attempting to model its risks from the 100 home mortgages it has provided. It estimates that a given person has about a 1% chance of defaulting on the mortgage (not paying). What is the probability that no one defaults, assuming that default is independent across the homeowners?
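Under the independence assumption, the probability that none of the 100 homeowners defaults is \(0.99^{100}\). A quick check of that number:

```python
# P(no defaults) = (1 - 0.01)^100, assuming defaults are independent
# across the 100 mortgages.
p_default = 0.01
n = 100
p_no_defaults = (1 - p_default) ** n
print(f"{p_no_defaults:.4f}")  # about 0.3660
```

Even with only a 1% chance per mortgage, the bank should expect at least one default in roughly two out of every three such portfolios.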
With any probabilistic model, usefulness depends on the validity of the underlying assumptions. In the previous question, the bank assumed that default was independent across the 100 homeowners in the small town. Is this a reasonable assumption for this model?