I will try to make this as insightful as possible, given my interest in data structures in both mathematics and computer science.
The reason we use trees in mathematics is to organize data into a structured form and to link the individual pieces of data (from now on referred to as objects) together.
The advantage of a tree structure is its ability to hold continuously arriving real-world data, which can be added and deleted at any time.
In other words, for scientific purposes, trees are an ideal way of organizing data in a sequential, structured manner while still allowing the structure to grow and shrink in real time.
There are four required steps/procedures to be taken before the “tree” can work as an efficient representation of a certain data structure:
Step One: data must be “sorted” in a certain manner.
This means that the data may be sorted by the degree of polynomials, by the number of significant figures (for floating-point calculations), in ascending or descending numerical value (lowest to highest or the reverse), and so on.
In computer science, a strict algorithm is used to maintain the efficiency of the data structure (i.e., if it contains continuous data, or any set of decimal numbers where precision matters, we want the precision of the numbers to be preserved once they are sorted).
For this reason, the binary search is implemented.
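As a small illustrative sketch (the list of readings here is hypothetical), the data can be put into ascending numerical order before any searching takes place:

```python
# Binary search requires sorted input, so the data is sorted first
# (ascending numerical order in this sketch).
readings = [3.14, 0.577, 2.718, 1.618, 1.414]

sorted_readings = sorted(readings)  # returns a new list; original is untouched
print(sorted_readings)  # [0.577, 1.414, 1.618, 2.718, 3.14]
```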
Function of the binary search:
Assuming the structure is already sorted (if not, we must sort it first), the goal of the binary search is to find the location of a particular value, called the key, within the "tree." The binary search works by repeatedly narrowing the range it looks through: it takes the current range between the minimum and maximum positions and averages the lowest and highest positions to get the midpoint, the middle component. If the value being searched for is higher than the middle component's stored value, the search moves into the upper half of the array and looks again; if the middle component's stored value was too high, it moves into the lower half instead. The search keeps repeating this 'split the remainder' step until it finds the key (the value you want), and if it misses, it simply returns -1, indicating that the value we searched for does not exist within the specified length of the array.

Because of the recursive nature of the binary tree structure itself, the same halving logic can also be expressed recursively: each half of the range is searched in exactly the same way as the whole. This recursive formulation is the sister of the iterative binary search, and a binary search implementation should be able to work in either form.
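The iterative halving procedure described above can be sketched as follows (a minimal sketch in Python; the function name and sample values are illustrative, not taken from the text):

```python
def binary_search(arr, key):
    """Iterative binary search over a sorted list.
    Returns the index of key, or -1 if key is not present."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2   # midpoint of the current range
        if arr[mid] == key:
            return mid            # key found at position mid
        elif arr[mid] < key:
            low = mid + 1         # key must lie in the upper half
        else:
            high = mid - 1        # key must lie in the lower half
    return -1                     # key does not exist in the array

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```

Note that each pass discards half of the remaining range, which is what keeps the search efficient on large sorted arrays.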
The recursive binary search will "call itself" until it runs out of places to look for the key. It takes in a data set and compares the key (the specific value we are looking for) to the middle element of the data structure. If the key is greater than the middle element, it looks in the upper portion of the data structure; otherwise, if the key is less, it looks in the lower portion, and it keeps doing so until either the key is found and its position is returned, or it is not found and -1 is returned. The idea is the same: to keep the search clear and efficient, especially since we don't want to waste a lot of memory on computers.
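A recursive version of the same search might look like this (again a hypothetical sketch; the function name and default parameters are my own):

```python
def recursive_binary_search(arr, key, low=0, high=None):
    """Recursive binary search over a sorted list.
    Returns the index of key, or -1 if key is not present."""
    if high is None:
        high = len(arr) - 1
    if low > high:                # ran out of places to look
        return -1
    mid = (low + high) // 2
    if arr[mid] == key:
        return mid                # key found at position mid
    if key > arr[mid]:            # search the upper portion
        return recursive_binary_search(arr, key, mid + 1, high)
    return recursive_binary_search(arr, key, low, mid - 1)  # lower portion

print(recursive_binary_search([2, 4, 6, 8, 10], 10))  # 4
print(recursive_binary_search([2, 4, 6, 8, 10], 5))   # -1
```

Each recursive call works on a strictly smaller range, so the recursion is guaranteed to terminate.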
Once searching is handled, we can move on to deciding WHERE to place each value for the tree to exist. This will be done by implementing modulus division.
Step Two: Modulus Division
Next, the “modulus division” or modulo operation takes place, which...