xisto Community

mdchurchill

  1. In the above, each smiley is supposed to be read "b)" ... damn smilies
  2. The Lambda Calculus

1. Introduction

The lambda calculus is a fascinating "computer language" that is very simple, and yet within it you can create any program you could possibly write on any computer. It also seems to me to be the most obvious fundamental bridge between mathematics and computer science – and thus, as a Mathematics and Computer Science undergraduate, I find it very interesting.

We start off with a set of variables, V. These can be anything – things like x, y, z, a, b, c, etc. This set is considered infinite (although countable, which is very important for proving undecidability results about the lambda calculus). We then have terms, made up of variables, declarations of functions and applications of functions, of the following forms:

V := x, y, z, a, b, c, …
T := V | T T | \V.T

Read \x.T as "the function that takes x as an input and returns T as an output". For example, \x.x is the identity function that returns its input; \xy.y takes in two values and returns the second; etc. (\xy.y is an abbreviation for \x.(\y.y), and represents a technique called currying, where we represent a function of two arguments as a function that takes the first argument and returns a function, which then takes the second argument and returns an answer.) Another example is the function \f.fx, which takes in a function f and returns the value of f(x).

In the lambda calculus, functional application is denoted by simple concatenation. For example, (\x.x) t = t – this is the identity function applied to the variable t, which returns the variable t. Another example would be (\f.ft) (\xy.x) = (\xy.x) t = \y.t.

Notice that functional application is basically substitution – the result of (\x.T) applied to y is T with every free occurrence of x replaced by y, which we write T[y/x]. Thus the "execution" of the lambda calculus essentially comes down to one rule, known as "beta-conversion":

(\x.T) y = T[y/x]

…only it's not quite that simple. Consider (\fy.fy) y. Naively, this reduces to \y.yy – the y we substituted in has been "captured" by the inner binder, changing its meaning. Hence we need alpha-conversion: we rename the bound variables (variables that are "holes" representing the input to some function) to fresh variables that aren't used elsewhere. For example, before we start reducing (\x.xy) (\x.xy), we would go through the expression and rename the second x to a new variable, e.g. z, giving (\x.xy) (\z.zy).

Thus we have our "algorithm" for running "programs" in the lambda calculus – rename all of the bound variables to new, fresh variables using alpha-conversion (and we can always do this, as there are infinitely many variables), then reduce using the beta-reduction rule given above. Alternatively, we can do alpha-conversion as we go along the path of beta-reduction, which is in fact sometimes necessary (see below).
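To make this concrete, here is a small sketch of the calculus in Haskell (the language used elsewhere in these posts) – a term datatype, capture-avoiding substitution and leftmost beta-reduction. The names Term, free, fresh, subst, step and normalise are my own choices for this sketch, not anything standard:

import Data.List (nub, (\\))

data Term = Var String        -- a variable
          | App Term Term     -- application:  T T
          | Lam String Term   -- abstraction:  \x.T
          deriving (Show, Eq)

-- the free variables of a term
free :: Term -> [String]
free (Var x)   = [x]
free (App s t) = nub (free s ++ free t)
free (Lam x t) = free t \\ [x]

-- a variable name not in the given list
fresh :: [String] -> String
fresh used = head [v | n <- [0..] :: [Int], let v = "v" ++ show n, v `notElem` used]

-- capture-avoiding substitution:  subst x s t  is  t[s/x],
-- i.e. t with every free occurrence of x replaced by s
subst :: String -> Term -> Term -> Term
subst x s (Var y)
  | x == y    = s
  | otherwise = Var y
subst x s (App t u) = App (subst x s t) (subst x s u)
subst x s (Lam y t)
  | y == x          = Lam y t                                   -- x is shadowed; nothing to do
  | y `elem` free s = Lam y' (subst x s (subst y (Var y') t))   -- alpha-convert first
  | otherwise       = Lam y (subst x s t)
  where y' = fresh (x : free s ++ free t)

-- one step of leftmost (outermost) beta-reduction, if one exists
step :: Term -> Maybe Term
step (App (Lam x t) u) = Just (subst x u t)     -- beta:  (\x.t) u = t[u/x]
step (App t u) = case step t of
                   Just t' -> Just (App t' u)
                   Nothing -> fmap (App t) (step u)
step (Lam x t) = fmap (Lam x) (step t)
step (Var _)   = Nothing

-- keep reducing until no redex remains (may loop, just like the calculus)
normalise :: Term -> Term
normalise t = maybe t normalise (step t)

For example, normalise (App (Lam "x" (Var "x")) (Var "t")) gives Var "t", matching (\x.x) t = t above.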
2. Reduction Strategies

Consider (\xy.x) t ((\x.xx)(\x.xx)). We apply alpha-conversion to this, to get (\xy.x) t ((\z.zz)(\v.vv)). Now we have a choice – there are two possible beta-reductions. If we reduce the first, we get (\y.t) ((\z.zz)(\v.vv)), which (again taking the leftmost reduction) reduces to t, as (\y.t) ignores its argument. However, if we decide to reduce the (\z.zz)(\v.vv) term first, we get (\v.vv)(\v.vv), which alpha-converts to (\z.zz)(\v.vv) – where we started. We thus get into an infinite loop. So the order in which we reduce terms is important – in the former case we get an answer, t; in the latter case we keep going for ever. Which one should we take? In this case it is the leftmost (i.e. outermost) reduct – but is this true in the general case? Taking the outermost reduct is like evaluating (3+4)^2 as (3+4).(3+4), then (7).(3+4), then (7).(7), then 49, whilst doing the 3+4 first takes less time, as we only do it once. There is an even more important question, though – if different reduction paths can lead to an answer or not, is it possible that two reduction paths lead to different answers? Fortunately, we have the following two theorems.

The Church-Rosser Theorem: Suppose a reduces to b and a reduces to c, both of which cannot be reduced further. Then b and c are the same term (up to alpha-conversion).

The Leftmost Reduction Theorem: Suppose a reduces to some b which cannot be reduced further. Then a will reach b if you follow a leftmost reduction strategy, always reducing the outermost lambda you see.

From the Church-Rosser Theorem, we can show that the lambda calculus is consistent, i.e. it is not the case that all terms are equal. From the Leftmost Reduction Theorem, we can show that in the example above, if a term containing a non-terminating subterm still reaches an answer, then that subterm was never needed – it can be replaced with any other term and you still get the same answer.

Implementations of beta-reduction often use "graph reduction" rather than straightforward beta-reduction: (3+4)^2 reduces to %*%, where % is a pointer to a single shared copy of 3+4 – reducing that subterm once then updates both places % occurs. This way we only do each piece of work once, while still being guaranteed to reach an answer; this is apparently the optimum reduction strategy to use.

3. Doing Useful Things

"Well, that's all well and good, but I thought the lambda calculus could do anything any computer could possibly do?" I hear you ask. Well, here are some examples.

The lambda calculus just has functions, nothing else. For a reasonable programming language we need conditionals, booleans, looping, numbers, lists, etc., which are clearly not available in the lambda calculus. Well, they kind of are.

OK, your first demand was conditionals. Encode

true := \xy.x
false := \xy.y
if := \abc.abc

Then if true a b = true a b = a, and if false a b = false a b = b (note there isn't really any need for if here at all – we could just drop it and use the booleans directly). We can also have, for example,

and := \bc. if b c false
or := \bc. if b true c
not := \b. if b false true

These are all reasonably intuitive. Secondly, we have numbers. Encode

0 := \fx.x
1 := \fx.fx
2 := \fx.f(fx)
3 := \fx.f(f(fx))

and so on (so the numeral n contains n applications of f). Then we can define

succ := \nfx.f(nfx)
zero? := \n.n(\x.false)true   (zero? returns true if its argument is zero)
pred := \n. if (zero? n) 0 (n (\y.y i (succ y)) (\ab.0))   where i is the identity, i := \x.x
eq := \ab. if (zero? a) (zero? b) (if (zero? b) false (eq (pred a) (pred b)))

Some of these are intuitive from the definitions above (succ, zero?); some need a little more unwinding, such as eq (which, like the definitions below, is recursive – see the caveat that follows); and some are reasonably incomprehensible, such as pred. (To see pred, one may take the special case of zero out and prove by induction that it works on all values other than zero, then add the special case back in.)
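Several of these encodings can be transcribed directly into Haskell and experimented with. A small sketch (suc and toInt are my names; zero? and pred do not transcribe quite so directly, because Haskell's type system gets in the way):

-- Church booleans, exactly as above
true, false :: a -> a -> a
true  x y = x
false x y = y

-- Church numerals: the numeral n applies its first argument n times
zero, one, two :: (a -> a) -> a -> a
zero f x = x
one  f x = f x
two  f x = f (f x)

-- successor, as in  succ := \nfx.f(nfx)
suc :: ((a -> a) -> a -> a) -> (a -> a) -> a -> a
suc n f x = f (n f x)

-- read a numeral back as an ordinary Int, to check our working
toInt :: ((Int -> Int) -> Int -> Int) -> Int
toInt n = n (+1) 0
-- toInt (suc (suc two))  ==>  4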
We would like to say that

add := \ab. if (zero? a) b (succ (add (pred a) b))

which corresponds to a+b := ((a-1)+b) + 1. Similarly,

mult := \ab. if (eq a 1) b (add b (mult (pred a) b))
sub := \ab. if (zero? b) a (pred (sub a (pred b)))

which corresponds to a-b := (a-(b-1)) - 1, and div can be defined in the same style using repeated subtraction. However, we cannot actually do so, as the lambda calculus doesn't allow recursion (remember, these definitions are just abbreviations, and we can't have recursive abbreviations unless we're unix programmers). However, we can directly simulate recursion. The key observation is the existence of the fixed-point operator theta:

theta := (\xy.y(xxy)) (\xy.y(xxy))

Note that this is somehow paradoxical, as we are applying a function to itself. This is what makes the lambda calculus so powerful (the fact that we can do this kind of thing), and also sometimes amusingly obscure :-). Theta has the nice property that theta f = f(theta f). So, if we have a formula f = … f …, we can replace it by f = theta(\s. … s …), and this allows us to simulate recursion in the lambda calculus. Thus, in the examples above,

add := theta(\fab. if (zero? a) b (succ (f (pred a) b)))

and so on. Finally, we can also represent data structures within the lambda calculus. For example, list structures:

cons := \abf.fab
head := \c.c(\ab.a)
tail := \c.c(\ab.b)
nil := \f.true
nil? := \x.x(\ht.false)

The recurring theme here is that we simulate other objects by functions that behave in the way we would like when used with other functions (e.g. above, head (cons p q) = p, etc.).
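In Haskell, theta itself is rejected by the type checker (because of the self-application), but its defining property theta f = f (theta f) can be taken as a definition, which gives a working sketch of the recursive definitions above (fix, hd and tl are my names, the latter two chosen to avoid clashing with the Prelude):

-- theta f = f (theta f), written down directly
fix :: (a -> a) -> a
fix f = f (fix f)

-- add, defined without naming itself, as in
--   add := theta (\fab. if (zero? a) b (succ (f (pred a) b)))
add :: Int -> Int -> Int
add = fix (\f a b -> if a == 0 then b else succ (f (pred a) b))
-- add 3 4  ==>  7

-- the list encodings, transcribed the same way
cons a b = \f -> f a b
hd c = c (\a b -> a)    -- head := \c.c(\ab.a)
tl c = c (\a b -> b)    -- tail := \c.c(\ab.b)
-- hd (cons 1 2) ==> 1,   tl (cons 1 2) ==> 2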
4. Computability and Decidability

This section is mathematics, and is heavier going than the previous sections.

Roughly speaking, a recursive function is a partial function from N^k to N built up from fundamental building blocks – the constant functions, the successor function and the projections – by composition, primitive recursion and minimalisation. Using the computations above, it can be shown that any recursive function can be defined in the lambda calculus. That is:

If f : N^k -> N is a recursive function, then there exists a lambda term g such that g ⌜x1⌝ … ⌜xk⌝ = ⌜f(x1,…,xk)⌝.

Here ⌜n⌝ denotes the lambda term representing the number n. This is shown just by construction, using the definitions above.

Church's Thesis states that the recursive functions are precisely the computable functions, i.e. the functions that can be computed by a Turing machine – a superset of those computable by any physical computer, as a Turing machine has a potentially infinite memory. Conversely, any lambda-definable function from N to N is recursive, as we can write down an algorithm to evaluate it – namely leftmost reduction. This shows that we can do anything in the lambda calculus that we can do on a Turing machine, i.e. anything that can be done by any computer at all.

Now we look at decidability. The set of terms T is countable, so there is a bijection # : T -> N, and by constructing a suitable example we can see that # can be chosen to be computable. Define "" : T -> T by "t" = ⌜#t⌝; that is, "" takes a term and maps it to the lambda calculus representation of the number of that term. The Second Recursion Theorem states that

for every term f, there exists a term u such that f "u" = u,

and it can be proved using the fact that any computable function is lambda-definable. From this we can prove the Scott-Curry Theorem, which generalises the fact that there is no algorithm that, given two arbitrary terms of the lambda calculus, always terminates and correctly decides whether they are equal. (This is a statement about the general case: equality may be decidable in special cases, e.g. by the reduction described above, but no single algorithm works for all pairs of terms. Note also that the equality referred to in this theorem is symmetric – so far I have been using a directed = in which terms reduce from left to right, while here we take its symmetric (and transitive) closure: a = b iff a and b can be connected by reductions in either direction.)

5. "Compiling" the lambda calculus

Surprisingly, there is something even simpler than the lambda calculus that the lambda calculus can be translated into, and this simpler system can be interpreted even more easily. It is known as combinatory logic – we can compile lambda calculus to combinatory logic and run a CL interpreter that is ridiculously simple, as there are no abstractions (and thus no bound variables, so no need for alpha-conversion). I shall perhaps come back and expand this section.

This introduction has described the untyped lambda calculus, where there are no types and any term may be applied to any other. The extension is the typed lambda calculus, which is not as simple as the untyped calculus.

In conclusion, the lambda calculus is a very powerful system that is itself very simple, and can be compiled into an even simpler form.

I shall probably return and improve this at some point.
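In the meantime, as a taste of what such a translation looks like, here is a minimal sketch of the standard S/K/I bracket-abstraction algorithm, reusing the Term type from the first sketch (CL, compile and abstract are my names; real compilers use cleverer abstraction rules, as this naive one can blow up in size):

-- combinator terms; they reduce by:  I a -> a,  K a b -> a,  S f g x -> (f x)(g x)
data CL = S | K | I | CVar String | CApp CL CL deriving Show

-- translate a lambda term into combinators
compile :: Term -> CL
compile (Var x)   = CVar x
compile (App t u) = CApp (compile t) (compile u)
compile (Lam x t) = abstract x (compile t)

-- abstract x c behaves like \x.c, but contains no lambdas
abstract :: String -> CL -> CL
abstract x (CVar y) | x == y = I
abstract x (CApp c d)        = CApp (CApp S (abstract x c)) (abstract x d)
abstract x c                 = CApp K c   -- x does not occur in c (a different variable, S, K or I)

For example, compile (Lam "x" (Var "x")) gives I, and the interpreter for the result needs only the three reduction rules in the comment above.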
  3. Functional programming is indeed very nice. Also, it's now becoming a lot more useful with the introduction of F#, my current language of choice, which allows one to do functional programming (basically OCaml) in the .NET framework, letting you link it up with everything else (including, for example, P#, a Prolog interpreter for the .NET framework, allowing you to do predicate-logic inference from a functional programming language :-) Also, if you'd like to experiment and see what functional programming is about in a fun environment, I refer you to http://www.cs.ox.ac.uk/geomlab/
  4. Buying Visual Studio .NET seems silly when you can program in .NET, with all its features and all of the documentation, for free. If you want an IDE, SharpDevelop is great (just google it or something) – it's free, lightweight and open source, so yayy :-)
  5. It basically uses a breadth-first search of the maze, removing backward paths so that the search does not double back on itself. We represent a place (square) in the maze by its x and y coordinates:

> type Place = (Int, Int)

A direction, corresponding to a compass point (Eq is needed so walls can be compared):

> data Direction = N | S | E | W deriving Eq

The geographic functions opposite and move will be useful throughout the course of investigating the mazes. Their definitions are given below:

> opposite :: Direction -> Direction
> opposite N = S
> opposite S = N
> opposite E = W
> opposite W = E

> move :: Direction -> Place -> Place
> move N (i,j) = (i,j+1)
> move E (i,j) = (i+1,j)
> move S (i,j) = (i,j-1)
> move W (i,j) = (i-1,j)

We'll represent a piece of wall by a place and a direction. For example, ((3,4), N) means that there is a wall to the north of (3,4), and to the south of (3,5).

> type Wall = (Place, Direction)

A size of (x,y) means that the squares are numbered (i,j) for i <- [0..x) and j <- [0..y).

> type Size = (Int, Int)

Here we define the given initial representation of mazes. Note that this module is independent: the actual representation of mazes is not used in the solving or drawing of mazes. The 'interface' of this module with the outside world contains three functions – makeMaze, which takes a list of walls and produces a maze; hasWall, which returns whether there is a wall in a specific place and direction; and sizeOf, which returns an ordered pair representing the size of the maze. This is reflected in the module definition. We'll represent a maze by its size and a list of its walls:

> data Maze = AMaze Size [Wall]

The list of walls will be complete in the sense that we record both sides of each wall; for example, if the list includes ((3,4), N), then it will also include ((3,5), S). The following function creates a maze given its size and a list of walls; the list of walls given might not be complete in the above sense:

> makeMaze :: Size -> [Wall] -> Maze
> makeMaze (x,y) walls =
>   let boundaries = -- the four boundaries
>         [((0,j), W) | j <- [0..y-1]] ++   -- westerly boundary
>         [((x-1,j), E) | j <- [0..y-1]] ++ -- easterly boundary
>         [((i,0), S) | i <- [0..x-1]] ++   -- southerly boundary
>         [((i,y-1), N) | i <- [0..x-1]]    -- northerly boundary
>       allWalls = walls ++ boundaries ++ map reflect (walls ++ boundaries)
>   in AMaze (x,y) allWalls

The following function "reflects" a wall, i.e. gives its representation as seen from the other side; for example, reflect ((3,4), N) = ((3,5), S):

> reflect :: Wall -> Wall
> reflect ((i,j), d) = (move d (i,j), opposite d)

The following function tests whether the maze includes a wall in a particular direction from a particular place:

> hasWall :: Maze -> Place -> Direction -> Bool
> hasWall (AMaze _ walls) pos d = (pos,d) `elem` walls

The following function returns the size of a maze:

> sizeOf :: Maze -> Size
> sizeOf (AMaze size _) = size
We shall now look at a functional algorithm to solve a given maze. The type returned will be a path:

> type Path = [Direction]

So solveMaze takes a maze, a starting position and an end position, and returns a path between the two points. solveMaze makes use of a subsidiary function, solveMazeIter, which works by maintaining a list of partial solutions – a partial solution being a place in the maze coupled with a path that can be used to get to that place from the start. The initial list consists of a single partial solution – the start position and the empty path, i.e. the start position and the path to get from the start position to the start position. solveMazeIter then processes each element of the list (i.e. each partial solution) in turn, appending to the end of the list all the partial solutions that can be reached in one move from that position. Once all entries of the list have been examined, either the target will have been found or the maze is impossible. This is a very elegant, linear way to search for a solution to the problem, and is hence the one I use in solving mazes. So the subsidiary function solveMazeIter takes a maze, the target location and a list of partial solutions, and returns a path (when one is found):

> solveMaze :: Maze -> Place -> Place -> Path
> solveMaze maze start target = solveMazeIter maze target [(start, [])]

> solveMazeIter :: Maze -> Place -> [(Place, Path)] -> Path
> solveMazeIter maze target [] = error "maze has no solution"
> solveMazeIter maze target ((here,path):partials)
>   | here == target = path
>   | otherwise      = solveMazeIter maze target (partials ++ expOut)
>   where expOut = (if noWall N then [(move N here, path++[N])] else []) ++
>                  (if noWall E then [(move E here, path++[E])] else []) ++
>                  (if noWall S then [(move S here, path++[S])] else []) ++
>                  (if noWall W then [(move W here, path++[W])] else [])
>         noWall d = not (hasWall maze here d)

Here expOut is the list of locations accessible from the position being examined; the noWall subfunction is there for elegance of layout. solveMaze can successfully solve the first maze, and gives an output of

Main> solveMaze smallMaze (0,0) (3,2)
[E,N,E,S,E,N,N]
(44498 reductions, 73023 cells)

Main> solveMaze largeMaze (0,0) (22,21)
(59830457 reductions, 93059917 cells, 2526 garbage collections)
ERROR - Garbage collection fails to reclaim sufficient space

We can see by inspection that the first solution works. However, solveMaze fails to solve largeMaze in any reasonable amount of time, and in fact cannot be handled by the Hugs interpreter. The program above, therefore, is not efficient enough. We can increase the efficiency by keeping a subsidiary list of places we have already visited, and never visiting them again (any path through them would be longer than one we already have); a sketch of this version is given below.

Complexity analysis: with the visited list, the maximum number of times solveMazeIter is called is the number of squares of the maze, i.e. m*n. For each of these iterations, the hasWall function is called four times, each of which in turn requires a linear search of the list of walls, of which there are O(m*n). Therefore the order of this function over a maze of size m*n is O((m*n)^2).
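Here is a minimal sketch of that improved version, assuming the maze interface above (the primed names and the visited parameter are my own):

> solveMaze' :: Maze -> Place -> Place -> Path
> solveMaze' maze start target = solveMazeIter' maze target [start] [(start, [])]

> -- as solveMazeIter, but carrying the list of places already visited,
> -- and never enqueueing the same place twice
> solveMazeIter' :: Maze -> Place -> [Place] -> [(Place, Path)] -> Path
> solveMazeIter' maze target visited [] = error "maze has no solution"
> solveMazeIter' maze target visited ((here,path):partials)
>   | here == target = path
>   | otherwise      = solveMazeIter' maze target (map fst expOut ++ visited)
>                                     (partials ++ expOut)
>   where expOut = [ (move d here, path ++ [d])
>                  | d <- [N,E,S,W]
>                  , not (hasWall maze here d)
>                  , move d here `notElem` visited ]

Marking places as visited at the moment they are enqueued (rather than when they are examined) keeps the queue itself free of duplicates.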
  6. Quicksort is O(N log N) time on average, but at worst O(N^2) if the data is awkwardly constructed. There are algorithms that run in a guaranteed O(N log N) time (e.g. mergesort, heapsort), and these algorithms are much simpler – well, definitely in the second case. As for the former, here it is (in Haskell notation, which is so beautiful I AM introducing these forums to the lovely world of FP):

merge :: Ord a => [a] -> [a] -> [a]
merge (x:xs) (y:ys) = if x < y then x : merge xs (y:ys) else y : merge (x:xs) ys
merge [] ys = ys
merge xs [] = xs

mergeSort :: Ord a => [a] -> [a]
mergeSort []  = []
mergeSort [x] = [x]
mergeSort xs  = merge (mergeSort ys) (mergeSort zs)
  where ys = odds xs
        zs = evens xs

odds (x:y:xs) = x : odds xs
odds [x] = [x]
odds [] = []

evens (x:y:xs) = y : evens xs
evens [x] = []
evens [] = []

Here x:xs represents x at the front of a list xs, [] the empty list and [x] the singleton list (the base cases of mergeSort for empty and singleton lists are needed to stop the recursion). In other words, to sort the list [3,2,4,1] we split it into its odd- and even-position elements, [3,4] and [2,1], sort these recursively into [3,4] and [1,2], and then merge them together into [1,2,3,4]. merge takes two (ordered) lists and produces an (ordered) list by comparing the first elements of both (see above).
  7. I disagree. I love the infrastructure and architecture of .NET, and in particular C#. Let me explain: the .NET framework is precisely that – a framework. It provides a frame of methods, classes and namespaces for the things one would like to do – GUIs, internet, data processing, lots of collection and utility classes, etc. – together with a very object-orientated set of semantics in which to use them. The various languages – C#, VB.NET, C++.NET, Delphi.NET (produced by Borland, hence proving this isn't an entirely Microsoft thing) – can all use this framework, and all compile to .NET bytecode. Microsoft provides an interpreter for this bytecode on Windows (and an implementation of the bytecode interpreter for Linux is in production, so once that is done, the same .NET bytecode that runs on Windows will run on Linux). On one level, then, the .NET framework provides a nice way of abstracting things: you have the bytecode, produced from any language you wish, that can then run on many platforms – each platform has a GUI implementation with the same behaviour and properties, so the same code serves both systems. The entire .NET system is object-orientated itself, which is how things should be done in a modern world. Another thing I love is the actual power of the framework from the object-orientated point of view: the notation is very nice; it gives you full object-orientated power (in a way unlike any previous Microsoft language); it is very consistent, not treating system things specially compared to anything you would write yourself; and it has some quite outstanding features compared to Java or VB.NET, particularly in Framework version 2.0, including generics (polymorphic data types, for those from the functional programming world), anonymous functions (yayy! lambda abstractions in real life, not just in academia!) and even co-routines (or at least a beautiful iterator-translation implementation of them). The .NET framework, and in particular C#, is now my language of choice for most general tasks, and I used it in industry last summer writing a generic object-orientated machine simulator.
  8. A space elevator is a concept that has been in science fiction for a long time, but is a potential way to give us the ability to travel into space. The concept goes something like this: in space travel, most of the energy is needed to leave the Earth – to escape the Earth's gravitational well. Once one is out of that well, it is trivial (well, maybe not) to get elsewhere. The idea of a space elevator is to create an 'elevator' up to space stations above the atmosphere. This perhaps sounds silly, but the payoff is that the energy to take something up the elevator is paid for by the energy of something else (moon rock?) coming down the other side of a pulley system. It is difficult to build, but once built it can be used cheaply, as its net energy cost is minimal. We could then use it to get tools and equipment up to the space stations, where we could build our space ships. There are lots of technical difficulties to overcome – e.g. what materials to use, the base of the elevator having to be huge, etc. – but we are on our way to overcoming these as technology moves forward. It could be the first move into space...
  9. This is the case with the Net MD players such as your model. The newer Hi-MD models do not have this restriction: you can indeed record in analogue onto the MiniDisc and then transfer it digitally to the PC via the USB cable.