HFT is implemented via computer algorithms that take market data (trades and orders) as input, process it with statistical-arbitrage models, and issue trading orders as output. Statistical arbitrage is the process of discerning and exploiting statistical patterns in market data. What makes these algorithms high-frequency, as opposed to other trading algorithms, is the quick (sub-second) turnaround between the arrival of the input data and the emission of the output orders. Enabled by increases in computing power and network bandwidth, the technological arms race between profit-seeking firms has pushed turnaround times down to the order of microseconds.
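The data-in, signal, orders-out loop described above can be sketched as follows. This is a toy illustration, not a real trading model: the `StatArbStrategy` class, its rolling-mean rule, and its thresholds are all hypothetical, chosen only to show the shape of the pipeline and where turnaround time is measured.

```python
import time
from collections import deque

class StatArbStrategy:
    """Toy statistical-arbitrage signal: trade when the latest price
    deviates from a rolling mean by more than a threshold.
    (Hypothetical logic for illustration only.)"""

    def __init__(self, window=20, threshold=0.5):
        self.prices = deque(maxlen=window)  # rolling price history
        self.threshold = threshold

    def on_tick(self, price):
        """Input: one market-data tick. Output: an order tuple or None."""
        self.prices.append(price)
        if len(self.prices) < self.prices.maxlen:
            return None  # not enough history to estimate the pattern yet
        mean = sum(self.prices) / len(self.prices)
        if price < mean - self.threshold:
            return ("BUY", price)   # price looks statistically cheap
        if price > mean + self.threshold:
            return ("SELL", price)  # price looks statistically rich
        return None

# Feed a stream of ticks through the loop, timing each turnaround.
strategy = StatArbStrategy(window=5, threshold=0.5)
for p in [100.0, 100.1, 99.9, 100.0, 100.1, 99.2]:
    start = time.perf_counter()
    order = strategy.on_tick(p)                       # process the input
    turnaround_us = (time.perf_counter() - start) * 1e6  # microseconds
```

In a real system the turnaround measured here would include network transit to and from the exchange, which is why physical proximity (co-location) matters as much as code speed.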
Why is speed important to an HFT firm? Suppose a pattern has been discerned in the behavior of IBM and MSFT shares such that, given a certain configuration of IBM and MSFT market data (trades and orders), a move in the price of MSFT (and IBM) becomes more likely over the next few seconds or minutes. Whoever is fastest at (1) acquiring the input data, (2) processing it to establish that the pattern is present, and (3) sending out the orders that exploit the discerned statistical edge will capture the largest share of the profits and, by acting, diminish or extinguish the statistical anomaly. This is why constant investment in lower latency (a smaller delay in communicating market data and orders between the HFT algorithm and the exchanges) is necessary to remain competitive.
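An IBM/MSFT pattern of the kind described above is often modeled as a pairs trade on the spread between the two prices. The sketch below is a minimal, hypothetical version: the z-score threshold, the signal names, and the equal-weight spread are all assumptions for illustration, not the method of any actual firm.

```python
def pairs_signal(ibm_prices, msft_prices, entry_z=2.0):
    """Toy pairs-trading signal: z-score the IBM-MSFT price spread
    over the given history and flag statistically extreme values.
    Returns a signal string, or None when the spread looks normal."""
    spreads = [i - m for i, m in zip(ibm_prices, msft_prices)]
    n = len(spreads)
    mean = sum(spreads) / n
    var = sum((s - mean) ** 2 for s in spreads) / n
    std = var ** 0.5
    if std == 0:
        return None  # flat history, no statistical edge to measure
    z = (spreads[-1] - mean) / std
    if z > entry_z:
        return "SELL_IBM_BUY_MSFT"   # spread unusually wide: bet it narrows
    if z < -entry_z:
        return "BUY_IBM_SELL_MSFT"   # spread unusually narrow: bet it widens
    return None

# Example: the spread jumps on the last tick, producing an entry signal.
signal = pairs_signal([10, 10, 10, 10, 10, 15], [5, 5, 5, 5, 5, 5])
```

Every firm watching the same two symbols can compute something like this; the profit goes to whoever computes it and gets orders to the exchange first, which is exactly the race the paragraph above describes.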