Hardware and software environments affect how fast a program runs. Analyzing efficiency, or complexity, lets us compare algorithms in an environment-agnostic manner; it is measured in terms of the operational steps taken (time complexity) and the space required (space complexity).
Despite advances in computing hardware, tasks such as factoring large numbers still take a prohibitive amount of time. In fact, "A faster algorithm running on a slower computer will always win for sufficiently large instances. [...] Usually, problems don’t have to get that large before the faster algorithm wins". "In order to make most effective use of our computational resources, it's important that we have the skill set to analyze the complexity of algorithms, so we know what resources those algorithms require. Being able to analyze an algorithm allows us to have an idea of how well it scales as we throw larger and larger data sets at it."
Time complexity is described using asymptotic notation:
- Big-Theta: asymptotically tight bound
- Big-Omega: asymptotic lower bound or best-case scenario
- Big O: asymptotic upper bound or worst-case scenario
and is primarily discussed in terms of Big O notation.
See: https://learnxinyminutes.com/docs/asymptotic-notation/
For example, a linear search of an array is Ω(1) for Big-Omega (the target is found at the first element inspected) and O(n) for Big O (the target is last or not present, so every element is inspected).
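A minimal sketch of that linear search (the function name is illustrative): the early return is the Ω(1) best case, and falling through the loop is the O(n) worst case.

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == target:  # best case: hit on the first element -> Omega(1)
            return i
    return -1                # worst case: scanned all n elements -> O(n)
```

For example, `linear_search([3, 1, 4], 3)` returns 0 after one comparison, while `linear_search([3, 1, 4], 9)` inspects all three elements before returning -1.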
Rules for simplifying Big O expressions include (a helpful Stack Overflow post describes some of these):
- Add different steps ("do this, then that" → add the runtimes)
- Drop constants (O(2n) becomes O(n))
- Different inputs --> different variables (two distinct inputs give O(a + b), not O(n))
- Drop non-dominant terms (O(n² + n) becomes O(n²))
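The first and third rules can be sketched by counting steps over two distinct inputs (function names are illustrative): sequential loops add, nested loops multiply.

```python
def sequential_steps(a, b):
    """'Do this, then that' -> ADD the runtimes: O(a + b)."""
    steps = 0
    for _ in a:      # runs len(a) times
        steps += 1
    for _ in b:      # then runs len(b) times
        steps += 1
    return steps

def nested_steps(a, b):
    """'For each of this, do that' -> MULTIPLY: O(a * b)."""
    steps = 0
    for _ in a:
        for _ in b:  # inner loop runs fully for each outer iteration
            steps += 1
    return steps
```

With inputs of sizes 3 and 4, the sequential version takes 3 + 4 = 7 steps while the nested version takes 3 × 4 = 12; note that neither is O(n), because the two inputs get separate variables.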
Watch: Big O Notation
- Recursive time complexity (with multiple recursive calls, often O(branches^depth))
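As a sketch of recursive time complexity, assuming the classic Fibonacci example: the naive version makes two recursive calls per level, giving O(2^n), while caching each subproblem collapses the call tree to O(n).

```python
from functools import lru_cache

def fib_naive(n):
    """Two branches per call, depth n -> O(2^n) time."""
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoization: each subproblem is computed once -> O(n) time."""
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

Both return the same values, but `fib_memo(40)` finishes instantly while `fib_naive(40)` makes over a billion calls.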