Sunday, 30 August 2015

Games to be featured on our blog!!!





How to nominate games to be featured on our blog!!!


Wednesday, 26 August 2015

Cellular network


Montage of four professional US omnidirectional base-station antennas



A cellular network or mobile network is a communications network where the last link is wireless. The network is distributed over land areas called cells, each served by at least one fixed-location transceiver, known as a cell site or base station. In a cellular network, each cell uses a different set of frequencies from neighboring cells, to avoid interference and provide guaranteed bandwidth within each cell.

When joined together these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission.

Cellular networks offer a number of desirable features:
  • More capacity than a single large transmitter, since the same frequency can be used for multiple links as long as they are in different cells
  • Mobile devices use less power than with a single transmitter or satellite, since the cell towers are closer
  • Larger coverage area than a single terrestrial transmitter, since additional cell towers can be added indefinitely and are not limited by the horizon
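The frequency-reuse idea behind the first point can be sketched as a graph-coloring problem: cells are nodes, neighboring cells must not share a frequency, and non-adjacent cells may reuse one. The cluster layout and greedy policy below are illustrative assumptions, not taken from the text above.

```python
# Hypothetical sketch: frequency reuse as greedy graph coloring.
# Cells are nodes; edges join neighboring cells, which must not
# share a frequency.

def assign_frequencies(neighbors):
    """Greedily give each cell the lowest frequency index not
    already used by an assigned neighbor."""
    freq = {}
    for cell in sorted(neighbors):
        used = {freq[n] for n in neighbors[cell] if n in freq}
        f = 0
        while f in used:
            f += 1
        freq[cell] = f
    return freq

# A small cluster of seven hexagonal cells: a center cell 0
# surrounded by cells 1-6 (each outer cell touches its two
# ring neighbors and the center).
neighbors = {
    0: [1, 2, 3, 4, 5, 6],
    1: [0, 2, 6], 2: [0, 1, 3], 3: [0, 2, 4],
    4: [0, 3, 5], 5: [0, 4, 6], 6: [0, 5, 1],
}
freqs = assign_frequencies(neighbors)
# No two adjacent cells share a frequency, so the same channels
# can be reused in non-adjacent cells across a wide area.
assert all(freqs[c] != freqs[n] for c in neighbors for n in neighbors[c])
```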
Major telecommunications providers have deployed voice and data cellular networks over most of the inhabited land area of the Earth. This allows mobile phones and mobile computing devices to be connected to the public switched telephone network and public Internet.

Private cellular networks can be used for research or for large organizations and fleets, such as dispatch for local public safety agencies or a taxicab company.

https://en.wikipedia.org/wiki/Cellular_network

Cache memory definition



Cache memory, also called CPU memory, is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. This memory is typically integrated directly with the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU.
Cache memory is fast and expensive. Traditionally, it is categorized as "levels" that describe its closeness and accessibility to the microprocessor:
  • Level 1 (L1) cache is extremely fast but relatively small, and is usually embedded in the processor chip (CPU).
  • Level 2 (L2) cache is often more capacious than L1; it may be located on the CPU or on a separate chip or coprocessor with a high-speed alternative system bus interconnecting the cache to the CPU, so as not to be slowed by traffic on the main system bus.
  • Level 3 (L3) cache is typically specialized memory that works to improve the performance of L1 and L2. It can be significantly slower than L1 or L2, but is usually double the speed of RAM. In the case of multicore processors, each core may have its own dedicated L1 and L2 cache, but share a common L3 cache. When an instruction is referenced in the L3 cache, it is typically elevated to a higher tier cache.
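A toy model can make the idea of levels concrete. The sketch below (the sizes, the direct-mapped policy, and the two-level setup are illustrative assumptions, not taken from the definition above) routes each memory access through a small L1 and a larger L2 before falling back to RAM:

```python
# Hypothetical sketch of a two-level cache lookup, illustrating why
# a small, fast L1 backed by a larger L2 captures most accesses.

class DirectMappedCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = {}          # line index -> cached block address

    def access(self, block):
        idx = block % self.num_lines
        hit = self.lines.get(idx) == block
        self.lines[idx] = block  # fill the line on a miss
        return hit

def memory_access(addr, l1, l2, block_size=4):
    """Return which level served the access: L1, L2, or RAM."""
    block = addr // block_size
    if l1.access(block):
        return "L1"
    if l2.access(block):
        return "L2"
    return "RAM"

l1, l2 = DirectMappedCache(4), DirectMappedCache(16)
# A loop sweeping a small array twice: the first pass misses once
# per block; the second pass is served entirely from L1.
trace = [memory_access(a, l1, l2) for a in list(range(16)) * 2]
```

After the run, `trace` contains 4 RAM accesses (one cold miss per block) and 28 L1 hits, showing how locality of reference makes the fast level absorb almost all traffic.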

http://searchstorage.techtarget.com/definition/cache-memory




Tuesday, 25 August 2015

Wireless connectivity








Wireless capability is a key requirement for most enterprise mobility applications, and it has been reported that wireless-transmission failure rates are three times higher for non-rugged notebooks compared to rugged units. This difference is attributed to the greater experience of rugged-notebook vendors at integrating multiple radios into their products. Each transmission failure leads to five to ten minutes of lost productivity as the user has to re-login to the company network through a VPN.
Since enterprises are turning to cellular networks to enable full-time connectivity for their users, major vendors of rugged computers offer both built-in wireless LAN and wireless WAN capabilities, and partner with cellular carriers, such as Verizon and AT&T, as part of their offerings.[12][13] During the handoff between the various wireless LAN and wireless WAN connections, a mobile VPN allows the connection to persist, creating an always-connected infrastructure that is simpler for the user and eliminates application crashes and data loss.

https://en.wikipedia.org/wiki/Rugged_computer

Saturday, 22 August 2015

Rugged computer






                                                         



A rugged (or ruggedized, also ruggedised) computer is a computer specifically designed to operate reliably in harsh usage environments and conditions, such as strong vibrations, extreme temperatures and wet or dusty conditions. Such machines are designed from inception for the type of rough use typified by these conditions, not just in the external housing but in the internal components and cooling arrangements as well. In general, ruggedized and hardened computers share the same design robustness, and the terms are frequently interchangeable.
Typical end-user environments for rugged laptops, tablet PCs and PDAs are public safety, field sales, field service, manufacturing, retail, healthcare, transportation/distribution and the military. They are also used in the agricultural industries and by individuals for recreational activities, such as hunting or geocaching.

https://en.wikipedia.org/wiki/Rugged_computer








Friday, 14 August 2015

Big O notation






Example of big O notation: f(x) ∈ O(g(x)), as there exists c > 0 (e.g., c = 1) and x0 (e.g., x0 = 5) such that f(x) < cg(x) whenever x > x0.

In mathematics, big O notation describes the limiting behavior of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions. It is a member of a larger family of notations that is called Landau notation, Bachmann–Landau notation (after Edmund Landau and Paul Bachmann), or asymptotic notation. In computer science, big O notation is used to classify algorithms by how they respond (e.g., in their processing time or working space requirements) to changes in input size. In analytic number theory, it is used to estimate the "error committed" while replacing the asymptotic size, or asymptotic mean size, of an arithmetical function, by the value, or mean value, it takes at a large finite argument. A famous example is the problem of estimating the remainder term in the prime number theorem.
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates.
Big O notation is also used in many other fields to provide similar estimates. 
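The "upper bound with witnesses c and x0" idea can be checked numerically. In the sketch below, f, g and the witnesses c = 2, x0 = 10 are illustrative choices of ours, not taken from the text; sampling values is evidence, not a proof.

```python
# Illustrative check (not a proof): f(x) = x^2 + 10x is in O(x^2),
# because f(x) <= c*g(x) for all x beyond some threshold x0.
# The witnesses c = 2 and x0 = 10 are one valid choice among many.

def f(x):
    return x**2 + 10 * x

def g(x):
    return x**2

c, x0 = 2, 10
# The lower-order term 10x is absorbed by the constant factor c
# once x is large enough (x^2 + 10x <= 2x^2 iff x >= 10).
assert all(f(x) <= c * g(x) for x in range(x0, 10_000))
```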

https://en.wikipedia.org/wiki/Big_O_notation

Wednesday, 12 August 2015

DSPACE




In computational complexity theory, DSPACE or SPACE is the computational resource describing the resource of memory space for a deterministic Turing machine. It represents the total amount of memory space that a "normal" physical computer would need to solve a given computational problem with a given algorithm. It is one of the most well-studied complexity measures, because it corresponds so closely to an important real-world resource: the amount of physical computer memory needed to run a given program.
The measure DSPACE is used to define complexity classes, sets of all of the decision problems which can be solved using a certain amount of memory space. For each function f(n), there is a complexity class SPACE(f(n)), the set of decision problems which can be solved by a deterministic Turing machine using space O(f(n)). There is no restriction on the amount of computation time which can be used, though there may be restrictions on some other complexity measures (like alternation).
Several important complexity classes are defined in terms of DSPACE. These include:
  • REG = DSPACE(O(1)), where REG is the class of regular languages. In fact, REG = DSPACE(o(log log n)) (that is, Ω(log log n) space is required to recognize any non-regular language).[1][2]
Proof: Suppose that there exists a non-regular language L ∈ DSPACE(s(n)), for s(n) = o(log log n). Let M be a Turing machine deciding L in space s(n). By our assumption M ∉ DSPACE(O(1)); thus, for any arbitrary k ∈ ℕ, there exists an input of M requiring more space than k.
Let x be an input of smallest size, denoted by n, that requires more space than k, and let C be the set of all configurations of M on input x. Because M ∈ DSPACE(s(n)), |C| ≤ 2^(c·s(n)) = o(log n), where c is a constant depending on M.
Let S denote the set of all possible crossing sequences of M on x. Note that the length of a crossing sequence of M on x is at most |C|: if it were longer, some configuration would repeat and M would go into an infinite loop. There are also at most |C| possibilities for every element of a crossing sequence, so the number of different crossing sequences of M on x is
|S| ≤ |C|^|C| ≤ (2^(c·s(n)))^(2^(c·s(n))) = 2^(c·s(n)·2^(c·s(n))) < 2^(2^(2c·s(n))) = 2^(2^(o(log log n))) = o(n)
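The easy direction, REG ⊆ DSPACE(O(1)), can be seen concretely: a deterministic finite automaton remembers only its current state while scanning the input, so its working space is constant regardless of input length. The DFA below recognizes an illustrative regular language of our choosing (binary strings with an even number of 1s):

```python
# Sketch of why regular languages need only constant space: this
# DFA accepts binary strings containing an even number of 1s while
# storing nothing but its current state.

DELTA = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}

def accepts(word):
    state = "even"                 # the only memory the machine keeps
    for symbol in word:
        state = DELTA[(state, symbol)]
    return state == "even"
```

However long `word` grows, the recognizer's memory footprint stays one state name, which is exactly the DSPACE(O(1)) claim.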

See more  https://en.wikipedia.org/wiki/DSPACE


Tuesday, 11 August 2015

Worst-case complexity





In computer science, the worst-case complexity (usually denoted in asymptotic notation) measures the resources (e.g. running time, memory) that an algorithm requires in the worst case. It gives an upper bound on the resources required by the algorithm.
In the case of running time, the worst-case time-complexity indicates the longest running time performed by an algorithm given any input of size n, and thus this guarantees that the algorithm finishes on time. Moreover, the order of growth of the worst-case complexity is used to compare the efficiency of two algorithms.
The worst-case complexity of an algorithm should be contrasted with its average-case complexity, which is an average measure of the amount of resources the algorithm uses on a random input.
Given a model of computation and an algorithm A that halts on each input x, the mapping tA: {0, 1}* → ℕ is called the time complexity of A if, for every x, A halts after exactly tA(x) steps.
Since we are usually interested in how the time complexity depends on the input length, abusing terminology, the time complexity is sometimes referred to as the mapping TA: ℕ → ℕ, defined by TA(n) := max{tA(x) : x ∈ {0, 1}^n}.
Similar definitions can be given for space complexity, randomness complexity, etc.

Consider performing insertion sort on n numbers on a random access machine. The best case for the algorithm is when the numbers are already sorted, which takes O(n) steps to perform the task. However, the worst-case input for the algorithm is when the numbers are reverse sorted, and it takes O(n²) steps to sort them; therefore the worst-case time-complexity of insertion sort is O(n²).
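The insertion-sort example can be made concrete by counting comparisons directly. The sketch below instruments a textbook insertion sort; on a sorted input of n items it performs n − 1 comparisons (linear), while on a reversed input it performs n(n − 1)/2 (quadratic):

```python
# Count the comparisons insertion sort makes on best-case and
# worst-case inputs, matching the O(n) vs O(n^2) behavior above.

def insertion_sort(a):
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # each loop test compares key to a[j]
            if a[j] > key:
                a[j + 1] = a[j]       # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

n = 100
_, best = insertion_sort(range(n))            # sorted: n - 1 = 99
_, worst = insertion_sort(range(n, 0, -1))    # reversed: n(n-1)/2 = 4950
```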


https://en.wikipedia.org/wiki/Worst-case_complexity





Sunday, 9 August 2015

Time complexity of an algorithm




In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input[1]:226. The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity. For example, if the time required by an algorithm on all inputs of size n is at most 5n³ + 3n for any n (bigger than some n₀), the asymptotic time complexity is O(n³).
Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, where an elementary operation takes a fixed amount of time to perform. Thus the amount of time taken and the number of elementary operations performed by the algorithm differ by at most a constant factor.
Since an algorithm's performance time may vary with different inputs of the same size, one commonly uses the worst-case time complexity of an algorithm, denoted as T(n), which is defined as the maximum amount of time taken on any input of size n. Less common, and usually specified explicitly, is the measure of average-case complexity. Time complexities are classified by the nature of the function T(n). For instance, an algorithm with T(n) = O(n) is called a linear time algorithm, and an algorithm with T(n) = O(Mⁿ) and mⁿ = O(T(n)) for some M ≥ m > 1 is said to be an exponential time algorithm.
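The 5n³ + 3n example above can be checked numerically. The witnesses below (c = 6, n₀ = 2) are one valid choice of ours, not taken from the text, and sampling values is evidence rather than a proof:

```python
# Numerical check of the example above: an algorithm taking
# 5n^3 + 3n steps on inputs of size n runs in O(n^3) time.
# One valid pair of witnesses is c = 6, n0 = 2, since the
# lower-order term satisfies 3n <= n^3 whenever n >= 2.

def steps(n):
    return 5 * n**3 + 3 * n

c, n0 = 6, 2
assert all(steps(n) <= c * n**3 for n in range(n0, 5_000))
```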
https://en.wikipedia.org/wiki/Time_complexity#Polynomial_time

Saturday, 8 August 2015

Compiler (computing)






A compiler is a computer program that translates a program written in one programming language into another programming language. Usually the target language is machine language, but it may also be an intermediate code (bytecode), or simply text. This translation process is known as compilation.

Building a compiler involves dividing the process into a series of phases that vary with its complexity. These phases are generally grouped into two tasks: analysis of the source program and synthesis of the object program.
  • Analysis: checks the correctness of the source program, and comprises the phases of lexical analysis (decomposing the source program into lexical tokens), syntax analysis (grouping the tokens into grammatical phrases) and semantic analysis (checking the semantic validity of the statements accepted during syntax analysis).
  • Synthesis: its goal is to generate the output expressed in the target language; it usually consists of one or more combinations of code generation phases (normally producing intermediate code or object code) and code optimization phases (which aim to produce the most efficient code possible).
Alternatively, the phases described for the analysis and synthesis tasks can be grouped into a front end and a back end:
  • Front end: the part that analyzes the source code, checks its validity, builds the derivation tree and fills in the symbol table. This part is usually independent of the platform or system being targeted, and comprises the phases from lexical analysis through intermediate code generation.
  • Back end: the part that generates platform-specific machine code from the results of the analysis performed by the front end.
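The phase structure can be sketched end to end in miniature. The toy compiler below (an illustrative sketch, covering only integer addition and multiplication, with a made-up stack machine as the target) has a lexer and parser for the analysis task and a code generator for the synthesis task:

```python
# Hypothetical miniature compiler illustrating the phases described
# above: lexical analysis, syntax analysis, and code generation
# targeting a simple stack machine.

import re

def lex(source):
    # Lexical analysis: break the source into tokens.
    return re.findall(r"\d+|[+*()]", source)

def parse(tokens):
    # Syntax analysis: build an AST, with * binding tighter than +.
    pos = 0
    def expr():
        nonlocal pos
        node = term()
        while pos < len(tokens) and tokens[pos] == "+":
            pos += 1
            node = ("+", node, term())
        return node
    def term():
        nonlocal pos
        node = factor()
        while pos < len(tokens) and tokens[pos] == "*":
            pos += 1
            node = ("*", node, factor())
        return node
    def factor():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok == "(":
            node = expr()
            pos += 1          # skip the closing ")"
            return node
        return ("num", int(tok))
    return expr()

def codegen(ast):
    # Synthesis: emit instructions for a simple stack machine.
    if ast[0] == "num":
        return [("PUSH", ast[1])]
    op = "ADD" if ast[0] == "+" else "MUL"
    return codegen(ast[1]) + codegen(ast[2]) + [(op, None)]

def run(program):
    # A tiny stack-machine "target" to execute the generated code.
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "ADD" else a * b)
    return stack[0]

code = codegen(parse(lex("2+3*(4+1)")))
```

A real compiler adds semantic analysis and optimization between these steps, but the pipeline shape is the same.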


https://es.wikipedia.org/wiki/Compilador

















Friday, 7 August 2015


Automata theory















Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science, under discrete mathematics (a section of Mathematics and also of Computer Science). Automata comes from the Greek word αὐτόματα meaning "self-acting".
The figure at right illustrates a finite state machine, which belongs to one well-known variety of automaton. This automaton consists of states (represented in the figure by circles), and transitions (represented by arrows). As the automaton sees a symbol of input, it makes a transition (or jump) to another state, according to its transition function (which takes the current state and the recent symbol as its inputs).
Automata theory is also closely related to formal language theory. An automaton is a finite representation of a formal language that may be an infinite set. Automata are often classified by the class of formal languages they are able to recognize.
Automata play a major role in the theory of computation, compiler design, artificial intelligence, parsing and formal verification.
Automata are defined to study useful machines under mathematical formalism. So, the definition of an automaton is open to variations according to the "real world machine", which we want to model using the automaton. People have studied many variations of automata. The most standard variant, which is described above, is called a deterministic finite automaton. The following are some popular variations in the definition of different components of automata.
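The states-and-transitions picture can be written out directly as a transition function. The deterministic finite automaton below recognizes an illustrative language of our choosing, binary strings that end in "01"; each input symbol triggers exactly one jump, just as described above:

```python
# A deterministic finite automaton as an explicit transition
# function delta(current state, input symbol) -> next state.
# This machine accepts binary strings ending in "01".

START, ACCEPT = "q0", {"q2"}
DELTA = {
    ("q0", "0"): "q1", ("q0", "1"): "q0",
    ("q1", "0"): "q1", ("q1", "1"): "q2",
    ("q2", "0"): "q1", ("q2", "1"): "q0",
}

def run_dfa(word):
    state = START
    for symbol in word:
        state = DELTA[(state, symbol)]   # one jump per input symbol
    return state in ACCEPT
```

The finite set of states is the "finite representation" of the language, even though the set of accepted strings is infinite.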

https://en.wikipedia.org/wiki/Automata_theory


Analysis of algorithms





In computer science, the analysis of algorithms is the determination of the amount of resources (such as time and storage) necessary to execute them. Most algorithms are designed to work with inputs of arbitrary length. Usually, the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity).
Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.
In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the length of the list being searched, or in O(log(n)), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However, the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.
Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called model of computation. A model of computation may be defined in terms of an abstract computer, e.g., Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2 n + 1 time units are needed to return an answer.
https://en.wikipedia.org/wiki/Analysis_of_algorithms