-rw-r--r--  asymptotic-notation.html.markdown  121
1 file changed, 121 insertions(+), 0 deletions(-)
diff --git a/asymptotic-notation.html.markdown b/asymptotic-notation.html.markdown
new file mode 100644
index 00000000..7b4bf8a9
--- /dev/null
+++ b/asymptotic-notation.html.markdown
@@ -0,0 +1,121 @@
+---
+category: Algorithms & Data Structures
+contributors:
+ - ["Jake Prather", "http://github.com/JakeHP"]
+---
+
+# Asymptotic Notations
+
+## What are they?
+
+Asymptotic notation is a language that allows us to analyze an algorithm's running time by
+identifying its behavior as the input size of the algorithm increases. This is also known as
+an algorithm's growth rate. Does the algorithm suddenly become incredibly slow when the input
+size grows? Does it mostly maintain its quick run time as the input size increases?
+Asymptotic notation gives us the ability to answer these questions.
+
+## Are there alternatives to answering these questions?
+
+One way would be to count the number of primitive operations at different input sizes.
+Though this is a valid solution, the amount of work this takes for even simple algorithms
+does not justify its use.
+
+Another way is to physically measure the amount of time an algorithm takes to complete
+given different input sizes. However, this method's accuracy and relativity (the times
+obtained are only relative to the machine they were computed on) is bound to
+environmental variables such as computer hardware specifications, processing power, etc.
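+
+To see the machine-dependence problem concretely, here is a minimal sketch using Python's
+standard `timeit` module (the algorithm timed here, summing a list, is just an illustrative
+choice); the numbers it prints will differ from run to run and from machine to machine:
+
+```python
+import timeit
+
+# Time a simple linear-pass algorithm (summing a list) at several input sizes.
+# The absolute times depend on the machine's hardware and current load,
+# which is why wall-clock measurement is a poor basis for comparing algorithms.
+for size in (1_000, 10_000, 100_000):
+    data = list(range(size))
+    seconds = timeit.timeit(lambda: sum(data), number=100)
+    print(f"n = {size:>7}: {seconds:.4f}s for 100 runs")
+```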
+
+## Types of Asymptotic Notation
+
+In the first section of this doc, we described how asymptotic notation identifies the
+behavior of an algorithm as the input size changes. Let us represent our algorithm as a
+function f, n as the input size, and f(n) as the running time. So for a given algorithm f,
+with input size n, you get some resultant run time f(n). This produces a graph where the
+Y axis is the runtime, the X axis is the input size, and the plotted points are the run
+times at each input size.
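+
+If you want to see such a graph yourself, a minimal sketch (assuming matplotlib is
+installed, and using a made-up running time f(n) = n log n purely for illustration) is:
+
+```python
+import math
+import matplotlib.pyplot as plt
+
+# Plot a hypothetical running time f(n) = n * log(n) against input size n.
+ns = range(1, 101)
+plt.plot(ns, [n * math.log2(n) for n in ns])
+plt.xlabel("input size n")
+plt.ylabel("running time f(n)")
+plt.show()
+```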
+
+You can label a function, or algorithm, with an asymptotic notation in many different ways.
+For example, you can describe your algorithm by its best case, worst case, or average case.
+The most common is to analyze an algorithm by its worst case. Why not the best case? Because
+if you determine an algorithm's run time, or time complexity, by its best case, that best
+case may only be obtained at a low, unrealistic input size. It would be like judging a
+sprinter by a 5 meter race: that isn't the best measurement.
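+
+To make best case versus worst case concrete, consider a linear search (a hypothetical
+example chosen for illustration): the best case finds the target immediately, while the
+worst case must scan the entire input.
+
+```python
+def linear_search(items, target):
+    """Return the index of target in items, or -1 if absent."""
+    for i, item in enumerate(items):
+        if item == target:
+            return i  # best case: target is the first element, one comparison
+    return -1  # worst case: target is absent, all n elements examined
+
+# Best case:  O(1) -- misleadingly optimistic.
+# Worst case: O(n) -- the measurement we usually report.
+```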
+
+### Types of functions, limits, and simplification
+Logarithmic Function - log n
+Linear Function - an + b
+Quadratic Function - an^2 + bn + c
+Polynomial Function - an^z + . . . + an^2 + an^1 + an^0, where z is some constant
+Exponential Function - a^n, where a is some constant
+
+These are some basic function growth classifications used in various notations. The list starts
+with the slowest-growing function (logarithmic) and ends with the fastest-growing (exponential).
+Notice that as 'n', or the input, increases in each of those functions, the result clearly grows
+much more quickly in the quadratic, polynomial, and exponential functions than in the logarithmic
+and linear ones.
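+
+A quick way to see these growth rates side by side is to tabulate each class at a few input
+sizes; in the sketch below, the constants a = 2 and z = 3 are arbitrary illustrative choices:
+
+```python
+import math
+
+# Tabulate each growth class at increasing n (a = 2, z = 3 chosen for illustration).
+print(f"{'n':>4} {'log n':>8} {'n':>6} {'n^2':>8} {'n^3':>10} {'2^n':>14}")
+for n in (10, 20, 30):
+    print(f"{n:>4} {math.log2(n):>8.2f} {n:>6} {n**2:>8} {n**3:>10} {2**n:>14}")
+```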
+
+One extremely important note: for the notations about to be discussed, you should do your best
+to use the simplest terms. This means disregarding constants and lower-order terms, because as
+the input size (or n in our f(n) example) increases to infinity (mathematical limits), the
+lower-order terms and constants are of little to no importance. That being said, if your
+constants are 2^9001, or some other ridiculous, unimaginable amount, realize that simplifying
+will skew your notation's accuracy.
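+
+As a worked illustration of why lower-order terms stop mattering, take an assumed example
+f(n) = 3n^2 + 5n + 100: the ratio f(n) / n^2 settles toward the leading constant 3 as n grows,
+which is why we simplify f(n) to n^2:
+
+```python
+# f(n) = 3n^2 + 5n + 100; the lower-order terms 5n and 100 fade as n grows,
+# so f(n) / n^2 approaches the leading constant 3.
+def f(n):
+    return 3 * n**2 + 5 * n + 100
+
+for n in (10, 100, 1_000, 10_000):
+    print(f"n = {n:>6}: f(n) / n^2 = {f(n) / n**2:.4f}")
+```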
+
+Since we want the simplest form, let's modify our table a bit...
+Logarithmic - log n
+Linear - n
+Quadratic - n^2
+Polynomial - n^z, where z is some constant
+Exponential - a^n, where a is some constant
+
+### Big-Oh
+Big-Oh, commonly written as O, is an asymptotic notation for the worst case, or an upper bound
+(ceiling) on the growth of a given function. Say f(n) is your algorithm's runtime, and g(n) is
+an arbitrary time complexity you are trying to relate to your algorithm. f(n) is O(g(n)) if,
+for some real constant c (c > 0), f(n) <= c g(n) for every sufficiently large input size n.
+
+Example 1
+f(n) = 3log n + 100
+g(n) = log n
+
+Is f(n) O(g(n))?
+Is 3log n + 100 O(log n)?
+Let's look at the definition of Big-Oh:
+3log n + 100 <= c * log n
+Is there some constant c that satisfies this for all sufficiently large n?
+3log n + 100 <= 150 * log n for all n >= 2 (the inequality fails at n = 1, since log 1 = 0)
+Yes! The definition of Big-Oh has been met, therefore f(n) is O(g(n)).
+
+Example 2
+f(n) = 3*n^2
+g(n) = n
+
+Is f(n) O(g(n))?
+Is 3*n^2 O(n)?
+Let's look at the definition of Big-Oh:
+3*n^2 <= c * n
+Is there some constant c that satisfies this for all sufficiently large n?
+No, there isn't: whatever c you pick, 3*n^2 outgrows c * n once n exceeds c/3.
+So f(n) is NOT O(g(n)).
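+
+As a quick numeric sanity check of both examples (a sketch, not a proof), this snippet tests
+Example 1's inequality over a range of n, then shows that for Example 2 any fixed c (here an
+arbitrary one million) eventually fails:
+
+```python
+import math
+
+# Example 1: 3*log(n) + 100 <= 150 * log(n) should hold for every n >= 2.
+assert all(3 * math.log2(n) + 100 <= 150 * math.log2(n) for n in range(2, 10_000))
+
+# Example 2: for ANY fixed c, 3*n^2 <= c*n fails once n > c / 3.
+c = 1_000_000
+n = c // 3 + 1
+print(3 * n**2 <= c * n)  # False: the bound breaks, so 3*n^2 is not O(n)
+```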
+
+### Big-Omega
+Big-Omega, commonly written as Ω, is an asymptotic notation for the best case, or a lower bound
+(floor) on the growth rate of a given function.
+
+f(n) is Ω(g(n)) if, for some real constant c (c > 0), f(n) >= c g(n) for every sufficiently large input size n.
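+
+For example, f(n) = 3*n^2 is Ω(n): pick c = 1, and 3*n^2 >= 1 * n holds for every n >= 1,
+so the quadratic never dips below that linear floor.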
+
+Feel free to head over to the additional resources for more examples. Big-Oh is the primary notation used
+for general algorithm time complexity.
+
+### Ending Note
+It's hard to keep this kind of topic short, and you should definitely go through the books and
+online resources listed. They go into much greater depth with definitions and examples.
+More where x = 'Algorithms & Data Structures' is on its way; we'll have a doc up on analyzing
+actual code examples soon.
+
+## Books
+
+* [Algorithms](http://www.amazon.com/Algorithms-4th-Robert-Sedgewick/dp/032157351X)
+* [Algorithm Design](http://www.amazon.com/Algorithm-Design-Foundations-Analysis-Internet/dp/0471383651)
+
+## Online Resources
+
+* [MIT](http://web.mit.edu/16.070/www/lecture/big_o.pdf)
+* [KhanAcademy](https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/asymptotic-notation) \ No newline at end of file