From eee42da220d800d6f5bf384c5a13fe8be66bddb9 Mon Sep 17 00:00:00 2001
From: Jake Prather
Date: Sat, 31 Jan 2015 21:16:20 -0600
Subject: asymptotic notation rough draft

---
 asymptotic-notation.html.markdown | 121 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 121 insertions(+)
 create mode 100644 asymptotic-notation.html.markdown

diff --git a/asymptotic-notation.html.markdown b/asymptotic-notation.html.markdown
new file mode 100644
index 00000000..7b4bf8a9
--- /dev/null
+++ b/asymptotic-notation.html.markdown
@@ -0,0 +1,121 @@
+---
+category: Algorithms & Data Structures
+contributors:
+    - ["Jake Prather", "http://github.com/JakeHP"]
+---
+
+# Asymptotic Notations
+
+## What are they?
+
+Asymptotic Notations is a language that allows us to analyze an algorithm's running time by
+identifying its behavior as the input size for the algorithm increases. This is also known as
+an algorithm's growth rate. Does the algorithm suddenly become incredibly slow when the input
+size grows? Does the algorithm mostly maintain it's quick run time as the input size increases?
+Asymptotic Notation gives us the ability to answer these questions.
+
+## Are there alternatives to answering these questions?
+
+One way would be to count the number of primitive operations at different input sizes.
+Though this is a valid solution, the amount of work this takes for even simple algorithms
+does not justify its use.
+
+Another way is to physically measure the amount of time the algorithm takes to complete
+given different input sizes. However, the accuracy and relativity (times obtained would
+only be relative to the machine they were computed on) of this method are bound to
+environmental variables such as computer hardware specifications, processing power, etc.
+
+## Types of Asymptotic Notation
+
+In the first section of this doc we described how an Asymptotic Notation identifies the
+behavior of an algorithm as the input size changes. Let us imagine our algorithm as a function
+f, n as the input size, and f(n) being the running time. So for a given algorithm f, with input
+size n you get some resultant run time f(n). This results in a graph where the Y axis is the
+runtime, X axis is the input size, and plot points are the resultants of the amount of time
+for a given input size.
+
+You can label a function, or algorithm, with an Asymptotic Notation in many different ways.
+For example, you can describe your algorithm by it's best case, worst case, or average case.
+The most common is to analyze your algorithm by it's worst case. This is because if you determine an
+algorithm's run time or time complexity, by it's best case, what if it's best case is only obtained
+given at a low, unrealistic, input size? It is equivalent to having a 5 meter sprinting race.
+That isn't the best measurement.
+
+### Types of functions, limits, and simplification
+Logarithmic Function - log n
+Linear Function - an + b
+Quadratic Function - an^2 + bn + c
+Polynomial Function - a_z*n^z + . . . + a_2*n^2 + a_1*n + a_0, where z is some constant
+Exponential Function - a^n, where a is some constant
+
+These are some basic function growth classifications used in various notations. The list starts at the least
+fast growing function (logarithmic) and goes on to the fastest growing (exponential). Notice that as 'n', or the input,
+increases in each of those functions, the result clearly increases much quicker in quadratic, polynomial, and
+exponential, compared to logarithmic and linear.
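To make these classifications tangible, here is a rough sketch that tabulates each growth class at increasing input sizes. (The sketch is in Python, which this article otherwise doesn't use, and the constants a = 2, b = 3, c = 1, z = 3 are arbitrary picks for illustration, not values from the text.)

```python
import math

# One representative function per growth classification above.
growth = {
    "logarithmic": lambda n: math.log2(n),
    "linear":      lambda n: 2 * n + 3,
    "quadratic":   lambda n: 2 * n**2 + 3 * n + 1,
    "polynomial":  lambda n: 2 * n**3,
    "exponential": lambda n: 2**n,
}

for n in (10, 20, 40, 80):
    values = "  ".join(f"{name}={fn(n):,.0f}" for name, fn in growth.items())
    print(f"n={n}: {values}")
```

Even by n = 80 the exponential column dwarfs every other one, which is exactly the point of the classification.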
+
+One extremely important note is that for the notations about to be discussed you should do your best to use the simplest terms.
+This means to disregard constants, and lower order terms, because as the input size (or n in our f(n)
+example) increases to infinity (mathematical limits), the lower order terms and constants are of little
+to no importance. That being said, if you have constants that are 2^9001, or some other ridiculous
+unimaginable amount, realize that simplifying will skew your notation accuracy.
+
+Since we want simplest form, let's modify our table a bit...
+Logarithmic - log n
+Linear - n
+Quadratic - n^2
+Polynomial - n^z, where z is some constant
+Exponential - a^n, where a is some constant
+
+### Big-Oh
+Big-Oh, commonly written as O, is an Asymptotic Notation for the worst case, or ceiling of growth
+for a given function. Say f(n) is your algorithm runtime, and g(n) is an arbitrary time complexity
+you are trying to relate to your algorithm. f(n) is O(g(n)), if for some real constants c (c>0) and n0,
+f(n) <= c g(n) for every input size n (n>n0).
+
+Example 1
+f(n) = 3log n + 100
+g(n) = log n
+
+is f(n) O(g(n))?
+is 3 log n + 100 O(log n)?
+Let's look to the definition of Big-Oh.
+3log n + 100 <= c * log n
+Is there some constant c that satisfies this for all n?
+3log n + 100 <= 150 * log n, n > 2 (undefined at n = 1)
+Yes! The definition of Big-Oh has been met; therefore, f(n) is O(g(n)).
+
+Example 2
+f(n) = 3*n^2
+g(n) = n
+
+is f(n) O(g(n))?
+is 3*n^2 O(n)?
+Let's look at the definition of Big-Oh.
+3*n^2 <= c * n
+Is there some constant c that satisfies this for all n?
+No there isn't, f(n) is NOT O(g(n)).
+
+### Big-Omega
+Big-Omega, commonly written as Ω, is an Asymptotic Notation for the best case, or a floor growth rate
+for a given function.
+
+f(n) is Ω(g(n)), if for some real constants c (c>0) and n0, f(n) is >= c g(n) for every input size n (n>n0).
+
+Feel free to head over to additional resources for examples on this. Big-Oh is the primary notation used
+for general algorithm time complexity.
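One such example, worked numerically: the f(n) from Example 1 is also Ω(log n). A rough sketch in Python — the witnesses c = 3 and n0 = 1 are chosen here for illustration and do not appear in the text:

```python
import math

def f(n):
    return 3 * math.log(n) + 100  # the runtime from Example 1

def g(n):
    return math.log(n)

c, n0 = 3, 1  # hypothetical witnesses for the Big-Omega definition

# The +100 term only lifts f(n) further above 3 * log n,
# so f(n) >= c * g(n) at every n past n0.
assert all(f(n) >= c * g(n) for n in range(n0 + 1, 100_000))
print("f(n) >= 3 * g(n) held at every sampled n > 1, consistent with f(n) being Ω(log n)")
```

A passing check like this is evidence rather than proof; the inequality still has to be argued for all n > n0, which here follows from log n and 100 both being non-negative.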
+
+### Ending Note
+It's hard to keep this kind of topic short and you should definitely go through the books and online
+resources listed. They go into much greater depth with definitions and examples.
+More where x='Algorithms & Data Structures' is on it's way; we'll have a doc up on analyzing actual
+code examples soon.
+
+## Books
+
+* [Algorithms](http://www.amazon.com/Algorithms-4th-Robert-Sedgewick/dp/032157351X)
+* [Algorithm Design](http://www.amazon.com/Algorithm-Design-Foundations-Analysis-Internet/dp/0471383651)
+
+## Online Resources
+
+* [MIT](http://web.mit.edu/16.070/www/lecture/big_o.pdf)
+* [KhanAcademy](https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/asymptotic-notation)
\ No newline at end of file
-- cgit v1.2.3

From b4b2eb214364ef349029523c8e99c0cde79288b0 Mon Sep 17 00:00:00 2001
From: Jake Prather
Date: Sat, 31 Jan 2015 21:22:30 -0600
Subject: grammar

---
 asymptotic-notation.html.markdown | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/asymptotic-notation.html.markdown b/asymptotic-notation.html.markdown
index 7b4bf8a9..e9966239 100644
--- a/asymptotic-notation.html.markdown
+++ b/asymptotic-notation.html.markdown
@@ -8,7 +8,7 @@ contributors:
 
 ## What are they?
 
-Asymptotic Notations is a language that allows us to analyze an algorithm's running time by
+Asymptotic Notations are languages that allows us to analyze an algorithm's running time by
 identifying its behavior as the input size for the algorithm increases. This is also known as
 an algorithm's growth rate. Does the algorithm suddenly become incredibly slow when the input
 size grows? Does the algorithm mostly maintain it's quick run time as the input size increases?
@@ -42,11 +42,13 @@ given at a low, unrealistic, input size? It is equivalent to having a 5 meter sp
 That isn't the best measurement.
 
 ### Types of functions, limits, and simplification
+```
 Logarithmic Function - log n
 Linear Function - an + b
 Quadratic Function - an^2 + bn + c
 Polynomial Function - a_z*n^z + . . . + a_2*n^2 + a_1*n + a_0, where z is some constant
 Exponential Function - a^n, where a is some constant
+```
 
 These are some basic function growth classifications used in various notations. The list starts at the least
 fast growing function (logarithmic) and goes on to the fastest growing (exponential). Notice that as 'n', or the input,
@@ -60,11 +62,13 @@ to no importance. That being said, if you have constants that are 2^9001, or som
 unimaginable amount, realize that simplifying will skew your notation accuracy.
 
 Since we want simplest form, let's modify our table a bit...
+```
 Logarithmic - log n
 Linear - n
 Quadratic - n^2
 Polynomial - n^z, where z is some constant
 Exponential - a^n, where a is some constant
+```
 
 ### Big-Oh
 Big-Oh, commonly written as O, is an Asymptotic Notation for the worst case, or ceiling of growth
 for a given function. Say f(n) is your algorithm runtime, and g(n) is an arbitrary time complexity
@@ -104,8 +108,8 @@ f(n) is Ω(g(n)), if for some real constants c (c>0) and n0, f(n) is >= c g(n)
 Feel free to head over to additional resources for examples on this. Big-Oh is the primary notation used
 for general algorithm time complexity.
 
-### Ending Note
-It's hard to keep this kind of topic short and you should definitely go through the books and online
+### Ending Notes
+It's hard to keep this kind of topic short, and you should definitely go through the books and online
 resources listed. They go into much greater depth with definitions and examples.
 More where x='Algorithms & Data Structures' is on it's way; we'll have a doc up on analyzing actual
 code examples soon.
@@ -118,4 +122,4 @@ code examples soon.
 ## Online Resources
 
 * [MIT](http://web.mit.edu/16.070/www/lecture/big_o.pdf)
-* [KhanAcademy](https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/asymptotic-notation)
\ No newline at end of file
+* [KhanAcademy](https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/asymptotic-notation)
-- cgit v1.2.3

From ef57fcd9a83e171144abb3bdd8b9a3f88c12d2ee Mon Sep 17 00:00:00 2001
From: Jake Prather
Date: Sun, 1 Feb 2015 11:06:06 -0600
Subject: more grammar

---
 asymptotic-notation.html.markdown | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/asymptotic-notation.html.markdown b/asymptotic-notation.html.markdown
index e9966239..ea23b19a 100644
--- a/asymptotic-notation.html.markdown
+++ b/asymptotic-notation.html.markdown
@@ -11,7 +11,7 @@ contributors:
 Asymptotic Notations are languages that allows us to analyze an algorithm's running time by
 identifying its behavior as the input size for the algorithm increases. This is also known as
 an algorithm's growth rate. Does the algorithm suddenly become incredibly slow when the input
-size grows? Does the algorithm mostly maintain it's quick run time as the input size increases?
+size grows? Does it mostly maintain it's quick run time as the input size increases?
 Asymptotic Notation gives us the ability to answer these questions.
 
 ## Are there alternatives to answering these questions?
@@ -20,7 +20,7 @@ One way would be to count the number of primitive operations at different input
 Though this is a valid solution, the amount of work this takes for even simple algorithms
 does not justify its use.
 
-Another way is to physically measure the amount of time the algorithm takes to complete
+Another way is to physically measure the amount of time an algorithm takes to complete
 given different input sizes. However, the accuracy and relativity (times obtained would
 only be relative to the machine they were computed on) of this method are bound to
@@ -28,15 +28,15 @@ environmental variables such as computer hardware specifications, processing pow
 ## Types of Asymptotic Notation
 
 In the first section of this doc we described how an Asymptotic Notation identifies the
-behavior of an algorithm as the input size changes. Let us imagine our algorithm as a function
+behavior of an algorithm as the input size changes. Let us imagine an algorithm as a function
 f, n as the input size, and f(n) being the running time. So for a given algorithm f, with input
 size n you get some resultant run time f(n). This results in a graph where the Y axis is the
 runtime, X axis is the input size, and plot points are the resultants of the amount of time
 for a given input size.
 
 You can label a function, or algorithm, with an Asymptotic Notation in many different ways.
-For example, you can describe your algorithm by it's best case, worst case, or average case.
-The most common is to analyze your algorithm by it's worst case. This is because if you determine an
+For example, you can describe an algorithm by it's best case, worst case, or average case.
+The most common is to analyze an algorithm by it's worst case. This is because if you determine an
 algorithm's run time or time complexity, by it's best case, what if it's best case is only obtained
 given at a low, unrealistic, input size? It is equivalent to having a 5 meter sprinting race.
 That isn't the best measurement.
-- cgit v1.2.3

From 4380245292e64ca22cd5cc6927ac146815914681 Mon Sep 17 00:00:00 2001
From: Jake Prather
Date: Sun, 1 Feb 2015 11:13:22 -0600
Subject: formatting

---
 asymptotic-notation.html.markdown | 34 +++++++++++++++++++---------------
 1 file changed, 19 insertions(+), 15 deletions(-)

diff --git a/asymptotic-notation.html.markdown b/asymptotic-notation.html.markdown
index ea23b19a..ba665a95 100644
--- a/asymptotic-notation.html.markdown
+++ b/asymptotic-notation.html.markdown
@@ -76,27 +76,31 @@ for a given function. Say f(n) is your algorithm runtime, and g(n) is an arbitra
 you are trying to relate to your algorithm. f(n) is O(g(n)), if for some real constants c (c>0) and n0,
 f(n) <= c g(n) for every input size n (n>n0).
 
-Example 1
-f(n) = 3log n + 100
+*Example 1*
+```
+f(n) = 3log n + 100
 g(n) = log n
+```
 
-is f(n) O(g(n))?
-is 3 log n + 100 O(log n)?
-Let's look to the definition of Big-Oh.
-3log n + 100 <= c * log n
-Is there some constant c that satisfies this for all n?
-3log n + 100 <= 150 * log n, n > 2 (undefined at n = 1)
+is f(n) O(g(n))?
+is 3 log n + 100 O(log n)?
+Let's look to the definition of Big-Oh.
+3log n + 100 <= c * log n
+Is there some constant c that satisfies this for all n?
+3log n + 100 <= 150 * log n, n > 2 (undefined at n = 1)
 Yes! The definition of Big-Oh has been met; therefore, f(n) is O(g(n)).
 
-Example 2
-f(n) = 3*n^2
+*Example 2*
+```
+f(n) = 3*n^2
 g(n) = n
+```
 
-is f(n) O(g(n))?
-is 3*n^2 O(n)?
-Let's look at the definition of Big-Oh.
-3*n^2 <= c * n
-Is there some constant c that satisfies this for all n?
+is f(n) O(g(n))?
+is 3*n^2 O(n)?
+Let's look at the definition of Big-Oh.
+3*n^2 <= c * n
+Is there some constant c that satisfies this for all n?
 No there isn't, f(n) is NOT O(g(n)).
 
 ### Big-Omega
-- cgit v1.2.3

From 26c7b45feb80bf1aa456e8e75a6087288e57ca10 Mon Sep 17 00:00:00 2001
From: Jake Prather
Date: Sun, 1 Feb 2015 11:14:48 -0600
Subject: formatting - removed italics

---
 asymptotic-notation.html.markdown | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/asymptotic-notation.html.markdown b/asymptotic-notation.html.markdown
index ba665a95..3ed1a25a 100644
--- a/asymptotic-notation.html.markdown
+++ b/asymptotic-notation.html.markdown
@@ -97,9 +97,9 @@ g(n) = n
 ```
 
 is f(n) O(g(n))?
-is 3*n^2 O(n)?
+is 3 * n^2 O(n)?
 Let's look at the definition of Big-Oh.
-3*n^2 <= c * n
+3 * n^2 <= c * n
 Is there some constant c that satisfies this for all n?
 No there isn't, f(n) is NOT O(g(n)).
-- cgit v1.2.3

From 287133fc49b51df19c2da24b49266085a19e8268 Mon Sep 17 00:00:00 2001
From: Jake Prather
Date: Sun, 1 Feb 2015 11:27:32 -0600
Subject: Added to 'why use worst case'. Formatting changes.

---
 asymptotic-notation.html.markdown | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/asymptotic-notation.html.markdown b/asymptotic-notation.html.markdown
index 3ed1a25a..f09d4d27 100644
--- a/asymptotic-notation.html.markdown
+++ b/asymptotic-notation.html.markdown
@@ -38,8 +38,9 @@ You can label a function, or algorithm, with an Asymptotic Notation in many diff
 For example, you can describe an algorithm by it's best case, worst case, or average case.
 The most common is to analyze an algorithm by it's worst case. This is because if you determine an
 algorithm's run time or time complexity, by it's best case, what if it's best case is only obtained
-given at a low, unrealistic, input size? It is equivalent to having a 5 meter sprinting race.
-That isn't the best measurement.
+given at a low, unrealistic, input size? It is equivalent to having a 5 meter sprinting race to find
+the fastest sprinter on earth. Or testing the 40 to 50MPH time of a car to determine the fastest car
+in the world. The measurement loses meaning because it doesn't represent the problem well.
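A concrete way to see the gap between best and worst case is a linear search — a quick sketch in Python (the function and data here are invented for illustration, not taken from the text):

```python
def linear_search(items, target):
    """Scan left to right, returning the index of target or -1."""
    for i, value in enumerate(items):
        if value == target:
            return i   # best case: target at index 0, a single comparison
    return -1          # worst case: target absent, len(items) comparisons

data = list(range(1_000_000))
print(linear_search(data, 0))    # best case: returns after one comparison
print(linear_search(data, -1))   # worst case: scans all one million elements
```

Judging the algorithm by the first call alone would suggest constant time; the second call is the behavior worst-case analysis plans for.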
+Some examples are, you can describe an algorithm by its best case, worse case, or equivalent case. +The most common is to analyze an algorithm by its worst case. You typically don't evaluate by best case because those conditions aren't what you're planning for. A very good example of this is sorting algorithms; specifically, adding elements to a tree structure. Best case for most algorithms could be as low as a single operation. However, in most cases, the element you're adding will need to be sorted appropriately through the tree, which could mean examining an entire branch. This is the worst case, and this is what we plan for. ### Types of functions, limits, and simplification ``` @@ -51,15 +47,11 @@ Polynomial Function - an^z + . . . + an^2 + a*n^1 + a*n^0, where z is some const Exponential Function - a^n, where a is some constant ``` -These are some basic function growth classifications used in various notations. The list starts at the least -fast growing function (logarithmic) and goes on to the fastest growing (exponential). Notice that as 'n', or the input, -increases in each of those functions, the result clearly increases much quicker in quadratic, polynomial, and -exponential, compared to logarithmic and linear. +These are some basic function growth classifications used in various notations. The list starts at the slowest growing function (logarithmic, fastest execution time) and goes on to the fastest growing (exponential, slowest execution time). Notice that as 'n', or the input, increases in each of those functions, the result clearly increases much quicker in quadratic, polynomial, and exponential, compared to logarithmic and linear. -One extremely important note is that for the notations about to be discussed you should do your best to use simplest terms. -This means to disregard constants, and lower order terms, because as the input size (or n in our f(n) +One extremely important note is that for the notations about to be discussed you should do your best to use simplest terms. This means to disregard constants, and lower order terms, because as the input size (or n in our f(n) example) increases to infinity (mathematical limits), the lower order terms and constants are of little -to no importance. That being said, if you have constants that are 2^9001, or some other ridiculous +to no importance. That being said, if you have constants that are 2^9001, or some other ridiculous, unimaginable amount, realize that simplifying will skew your notation accuracy. Since we want simplest form, lets modify our table a bit... @@ -73,9 +65,9 @@ Exponential - a^n, where a is some constant ### Big-Oh Big-Oh, commonly written as O, is an Asymptotic Notation for the worst case, or ceiling of growth -for a given function. Say f(n) is your algorithm runtime, and g(n) is an arbitrary time complexity -you are trying to relate to your algorithm. f(n) is O(g(n)), if for any real constant c (c>0), -f(n) <= c g(n) for every input size n (n>0). +for a given function. Say `f(n)` is your algorithm runtime, and `g(n)` is an arbitrary time complexity +you are trying to relate to your algorithm. `f(n)` is O(g(n)), if for any real constant c (c > 0), +`f(n)` <= `c g(n)` for every input size n (n > 0). *Example 1* ``` @@ -83,9 +75,9 @@ f(n) = 3log n + 100 g(n) = log n ``` -is f(n) O(g(n))? -is 3 log n + 100 O(log n)? -Let's look to the definition of Big-Oh. +Is `f(n)` O(g(n))? +Is `3 log n + 100` O(log n)? +Let's look to the definition of Big-Oh. 
 
 *Example 2*
 ```
 f(n) = 3*n^2
 g(n) = n
 ```
 
-is f(n) O(g(n))?
-is 3 * n^2 O(n)?
+Is `f(n)` O(g(n))?
+Is `3 * n^2` O(n)?
 Let's look at the definition of Big-Oh.
 ```
 3 * n^2 <= c * n
 ```
 Is there some constant c that satisfies this for all n?
-No there isn't, f(n) is NOT O(g(n)).
+No, there isn't. `f(n)` is NOT O(g(n)).
@@ -108,13 +104,13 @@ Big-Omega, commonly written as Ω, is an Asymptotic Notation for the best case,
 for a given function.
 
-f(n) is Ω(g(n)), if for some real constants c (c>0) and n0, f(n) is >= c g(n) for every input size n (n>n0).
+`f(n)` is Ω(g(n)), if for some real constants c (c > 0) and n0, `f(n)` is >= `c g(n)` for every input size n (n > n0).
 
 Feel free to head over to additional resources for examples on this. Big-Oh is the primary notation used
 for general algorithm time complexity.
 
 ### Ending Notes
 It's hard to keep this kind of topic short, and you should definitely go through the books and online
 resources listed. They go into much greater depth with definitions and examples.
-More where x='Algorithms & Data Structures' is on it's way; we'll have a doc up on analyzing actual
+More where x='Algorithms & Data Structures' is on its way; we'll have a doc up on analyzing actual
 code examples soon.
 
 ## Books
-- cgit v1.2.3

From 44e0d3a85942031228fa0f942c245ac4584e3e47 Mon Sep 17 00:00:00 2001
From: Graham Mueller
Date: Sun, 1 Feb 2015 11:50:48 -0600
Subject: Update asymptotic-notation.html.markdown

Fixing newlines hopefully. Hard to tell in the editor.

---
 asymptotic-notation.html.markdown | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/asymptotic-notation.html.markdown b/asymptotic-notation.html.markdown
index 50fa6f4f..c8187526 100644
--- a/asymptotic-notation.html.markdown
+++ b/asymptotic-notation.html.markdown
@@ -75,8 +75,8 @@ f(n) = 3log n + 100
 g(n) = log n
 ```
 
-Is `f(n)` O(g(n))?
-Is `3 log n + 100` O(log n)?
+Is `f(n)` O(g(n))?
+Is `3 log n + 100` O(log n)?
 Let's look to the definition of Big-Oh.
-- cgit v1.2.3

From 46fce8da7c5ea72f1e227e5d0653413145c19991 Mon Sep 17 00:00:00 2001
From: Adam
Date: Tue, 3 Feb 2015 20:26:54 +0000
Subject: Added name to asymptotic notation article

---
 asymptotic-notation.html.markdown | 1 +
 1 file changed, 1 insertion(+)

diff --git a/asymptotic-notation.html.markdown b/asymptotic-notation.html.markdown
index c8187526..901494ab 100644
--- a/asymptotic-notation.html.markdown
+++ b/asymptotic-notation.html.markdown
@@ -1,5 +1,6 @@
 ---
 category: Algorithms & Data Structures
+name: Asymptotic Notation
 contributors:
     - ["Jake Prather", "http://github.com/JakeHP"]
 ---
-- cgit v1.2.3

From 24b6ed0b8215e4013ef0f6f93ff52ad5515c8144 Mon Sep 17 00:00:00 2001
From: Adam Bard
Date: Wed, 4 Feb 2015 03:33:53 +0000
Subject: More line breaks for happy code blocks.

---
 asymptotic-notation.html.markdown | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/asymptotic-notation.html.markdown b/asymptotic-notation.html.markdown
index 901494ab..deb3e37d 100644
--- a/asymptotic-notation.html.markdown
+++ b/asymptotic-notation.html.markdown
@@ -40,6 +40,7 @@ For example, you can describe an algorithm by its best case, worst case, or ave
 The most common is to analyze an algorithm by its worst case. You typically don't evaluate by best case because those conditions aren't what you're planning for. A very good example of this is sorting algorithms; specifically, adding elements to a tree structure. Best case for most algorithms could be as low as a single operation. However, in most cases, the element you're adding will need to be sorted appropriately through the tree, which could mean examining an entire branch. This is the worst case, and this is what we plan for.
 
 ### Types of functions, limits, and simplification
+
 ```
 Logarithmic Function - log n
 Linear Function - an + b
@@ -56,6 +57,7 @@ to no importance. That being said, if you have constants that are 2^9001, or som
 unimaginable amount, realize that simplifying will skew your notation accuracy.
 
 Since we want simplest form, let's modify our table a bit...
+
 ```
 Logarithmic - log n
 Linear - n
@@ -71,6 +73,7 @@ Exponential - a^n, where a is some constant
 
 *Example 1*
+
 ```
 f(n) = 3log n + 100
 g(n) = log n
@@ -79,16 +82,21 @@ g(n) = log n
 Is `f(n)` O(g(n))?
 Is `3 log n + 100` O(log n)?
 Let's look to the definition of Big-Oh.
+
 ```
 3log n + 100 <= c * log n
 ```
+
 Is there some constant c that satisfies this for all n?
+
 ```
 3log n + 100 <= 150 * log n, n > 2 (undefined at n = 1)
 ```
+
 Yes! The definition of Big-Oh has been met; therefore, `f(n)` is O(g(n)).
 
 *Example 2*
+
 ```
 f(n) = 3*n^2
 g(n) = n
@@ -97,9 +105,11 @@ g(n) = n
 Is `f(n)` O(g(n))?
 Is `3 * n^2` O(n)?
 Let's look at the definition of Big-Oh.
+
 ```
 3 * n^2 <= c * n
 ```
+
 Is there some constant c that satisfies this for all n?
 No, there isn't. `f(n)` is NOT O(g(n)).
-- cgit v1.2.3

From 60cd26e48bfae81e1588a3f54b8b5a6d733e4dc0 Mon Sep 17 00:00:00 2001
From: Jake Prather
Date: Wed, 4 Mar 2015 17:48:10 -0600
Subject: Renamed Big-Oh to more prevalent notation, Big-O.

---
 asymptotic-notation.html.markdown | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/asymptotic-notation.html.markdown b/asymptotic-notation.html.markdown
index deb3e37d..e1f961f8 100644
--- a/asymptotic-notation.html.markdown
+++ b/asymptotic-notation.html.markdown
@@ -66,8 +66,8 @@ Polynomial - n^z, where z is some constant
 Exponential - a^n, where a is some constant
 ```
 
-### Big-Oh
-Big-Oh, commonly written as O, is an Asymptotic Notation for the worst case, or ceiling of growth
+### Big-O
+Big-O, commonly written as O, is an Asymptotic Notation for the worst case, or ceiling of growth
 for a given function. Say `f(n)` is your algorithm runtime, and `g(n)` is an arbitrary time complexity
 you are trying to relate to your algorithm. `f(n)` is O(g(n)), if for some real constants c (c > 0) and n0,
 `f(n)` <= `c g(n)` for every input size n (n > n0).
@@ -81,7 +81,7 @@ g(n) = log n
 
 Is `f(n)` O(g(n))?
 Is `3 log n + 100` O(log n)?
-Let's look to the definition of Big-Oh.
+Let's look to the definition of Big-O.
 
 ```
 3log n + 100 <= c * log n
@@ -93,7 +93,7 @@ Is there some constant c that satisfies this for all n?
 3log n + 100 <= 150 * log n, n > 2 (undefined at n = 1)
 ```
 
-Yes! The definition of Big-Oh has been met; therefore, `f(n)` is O(g(n)).
+Yes! The definition of Big-O has been met; therefore, `f(n)` is O(g(n)).
 
 *Example 2*
 
@@ -104,7 +104,7 @@ g(n) = n
 Is `f(n)` O(g(n))?
 Is `3 * n^2` O(n)?
-Let's look at the definition of Big-Oh.
+Let's look at the definition of Big-O.
 
 ```
 3 * n^2 <= c * n
@@ -119,7 +119,7 @@ for a given function.
 `f(n)` is Ω(g(n)), if for some real constants c (c > 0) and n0, `f(n)` is >= `c g(n)` for every input size n (n > n0).
 
-Feel free to head over to additional resources for examples on this. Big-Oh is the primary notation used
+Feel free to head over to additional resources for examples on this. Big-O is the primary notation used
 for general algorithm time complexity.
-- cgit v1.2.3