jkk

Members
  • Content Count: 7
  • Joined
  • Last visited
  1. guesthouse, you're absolutely right - the key words are "specific task". Marvin Minsky (AI pioneer & occasional sci-fi author) once said "Some tasks we thought would be very hard turned out to be easy, or at least manageable; there are machines that play chess very well, and there's a machine that became world champion at backgammon, if anyone cares. But some tasks that we thought would be very easy turned out to be very hard. How does a child learn to tie her shoelaces? How do you teach that to a machine?" If you think it's easy to teach a child to tie shoelaces, tell me how you'd command a robot to do it. If that's too hard, try an easier task: tell a robot what is necessary to boil an egg for your breakfast. Post your results to this thread (there's a rough sketch of the sort of thing I mean below these posts). Adaptability to the vagaries of the real world is the problem. If you could give a machine general knowledge equivalent to a human's, it might manage some of these general/adaptable tasks. The CYC project (http://www.cycorp.com/) has been working on this for a long time, and they've had some successes - but mostly in specialised domains, rather than truly "general" knowledge. It reminds me of an old joke about two Daleks who roll up to a flight of stairs, and one says "Well, that b*****s up our plans to conquer the universe!"
  2. Here's a good website: One City Challenge Strategy Guide http://forums.xisto.com/no_longer_exists/
  3. Check out some of the latest research on wearable technology at http://forums.xisto.com/no_longer_exists/ - I recommend clicking on "In the news" or "Technology".
  4. http://forums.xisto.com/no_longer_exists/ have some. So do http://forums.xisto.com/no_longer_exists/. Actually, typing "certified ethical hacker training" into Google produces quite a long list. And there's a bunch who have set themselves up as certifiers: http://forums.xisto.com/no_longer_exists/ I'm interested to know how widely their qualification is accepted/recognised.
  5. I can't believe the 'Search' button returns no hits for good ol' Civilization. Especially as FreeCiv 2.0 Beta is now out. FreeCiv 2 has some interesting variations, notably that you're likely to finish the research tree well before the end of the game. Well, if they will cut the price of the Colossus in half ... And if you start a spaceship, everyone declares war on you immediately, which can be entertaining. Oh, and diplomacy has some meaning now; it's no longer "hello, let's trade, war in 10 turns or less". Anyway, has anyone tried the One City Challenge? As the name suggests, you have to win the game with just one city. If your explorers get a free city from a tribal hut, you're allowed to reload the previous turn's auto-save game. But otherwise, you may NEVER own more than one city. You have to win through the space race, of course. Give it a try. It suddenly makes Wonders of the World like the Colossus and Shakey's Theatre much more important.
  6. That depends what you mean by 'understand'. A question/answer lookup is considered not to be 'intelligent'; it's just a database problem (sorry to any database hackers out there, but AI people look on databases the same way that hardware people look on software, or Mac users look on Windows). But then there's Eliza, which understood language syntax but little else. It pretended to be a psychiatrist by turning the user's statements into questions, and adding some bland statement whenever it couldn't (there's a toy sketch of the trick below these posts). Example: (from Weizenbaum, 1966) Note the failure to provide a sensible answer in the last question. Did Eliza 'understand'? Most people say no. Do psychiatrists 'understand', or just play the same trick? Now the question gets difficult. The next stage of A.I. is usually considered to be the 'expert system' - one that has been provided with rules by a human expert, and can follow these rules. There have been some stunning commercial successes with rule-based systems, but do they represent 'understanding'? Now some people say 'yes', because they clearly represent 'expertise', in some form. And then there are systems that can 'learn' - but in fact, all they do is look for patterns in data, of one kind or another, and turn those into decision-making rules. Now we're coming close to what would be considered 'understanding' in a human child. Such systems can adapt to new circumstances better. Finally, 'artificial life' learns rules that aren't decision-making rules, but rules of behaviour. It's conceivable that machines will be able to replicate ant behaviour in the near future. Which of these, if any, constitutes 'understanding'? Use up your post quota by telling me what you think, *and why*. There is a whole other strand of debate about A.I. as representing mental states (or 'mental models', if that term doesn't cause too many psychologists to throw up their hands in horror), but I'll leave that for another post.
  7. Has anyone taken courses/assessments to become a Certified Ethical Hacker, or some similar qualification? I'm interested in what you thought of the course, whether the material taught was actually helpful in real life, whether the qualification has made any difference to your work, etc.
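A rough sketch for post 1, since it asks for one: a deliberately naive "boil an egg" plan written as runnable Python stubs. Every step name is invented for illustration (there is no real robot API behind it); the comments mark the commonsense each step silently assumes.

```python
# A naive "boil an egg" plan as runnable stubs. All step names are made up
# for illustration; the comments are the unstated commonsense knowledge a
# robot would need to be told explicitly.

def step(description):
    print(f"ROBOT: {description}")

def boil_egg():
    step("find a saucepan")             # which cupboard? what if it's dirty or in use?
    step("fill it with water")          # which tap? how full counts as 'enough'?
    step("put it on the hob")           # which ring? is something already on it?
    step("turn the hob on")             # gas or electric? what setting?
    step("wait for the water to boil")  # how does the robot *recognise* boiling?
    step("fetch an egg")                # fridge or cupboard? reject cracked ones
    step("lower the egg in gently")     # force control a human never thinks about
    step("wait about four minutes")     # soft-boiled or hard? nobody said
    step("turn the hob off")
    step("take the egg out")            # without scalding the gripper

if __name__ == "__main__":
    boil_egg()
```

Even this toy version hides a dozen decisions a human makes without noticing, which is exactly Minsky's point.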
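And a toy sketch of the Eliza trick described in post 6, assuming nothing more than regular-expression pattern matching. The patterns and canned replies are my own examples, not Weizenbaum's actual 1966 script.

```python
import random
import re

# Reflect the user's statement back as a question; fall back to a bland
# prompt when nothing matches. Toy patterns only, not Weizenbaum's script.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

RULES = [
    (re.compile(r"i feel (.*)", re.I),        "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),          "How long have you been {0}?"),
    (re.compile(r"(.*)\bmother\b(.*)", re.I), "Tell me about your family."),
    (re.compile(r"my (.*)", re.I),            "Tell me more about your {0}."),
]

FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def reflect(fragment):
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement):
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return random.choice(FALLBACKS)  # the "bland statement" when it can't do better

if __name__ == "__main__":
    for line in ["I feel tired of arguing", "My code keeps crashing", "Bullies"]:
        print(">", line)
        print(respond(line))
```

It never represents what the words mean; when the patterns run out it falls back on "Please go on", which is exactly the kind of non-answer the post describes.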