xisto Community


Posts posted by CaptainRon


  1. Where can I get the Vista beta from? I tried searching on p2p... any other methods?

     By the way, I believe that MS is planning its next big jump, and that will be through a Human-Computer Natural Interface. They are planning to build an AI-based OS that will be able to talk to a user and do the work for him. This is what Bill Gates terms the next big evolution in interfaces.

     Actually, I am doing my final-year project on Human/Computer Interface, so I really started wondering about the MS proposition. If MS does come out with a truly functional Human/Computer Natural Interface, I think it will make sure that the "casual user" base MS holds today is still held for many more years to come. To counter such a threat, the Linux world will have to make its move right now.

     Since I am building my project on .NET (which can evidently compile on Mono 1.0, I tested it), I have a project idea. Will post it on Antilost.com.


  2. A website that I visit pretty regularly, Sitepoint.com, today published an excellent introduction to Ruby on Rails (ROR). Like many web developers, I have been terribly curious about this almost "magical, no fuss" web development framework, so the timing was perfect for Sitepoint to come out with the article. Danny's article on Sitepoint gives a brief introduction, but mostly stresses the "ease of development" that ROR brings along.

    We've witnessed almost three decades of "hero worshipping" OOP techniques in software programming, and for a brief period, with the onset of PHP5, we've witnessed the same in the web programming sector. Now with the introduction of ROR, this can only increase... increase exponentially. =) And it is a good thing, this OOP, it is good!

    Now that my interest in ROR has surfaced, I visited Wikipedia to see what they have to say about this new magical utopian web programming framework. And I must say the blokes at Wikipedia have done an excellent job maintaining the entry for ROR. It is definitely a must-read for anybody even remotely interested.

    But what really caught my eye was the philosophy of Ruby on Rails. It adheres to the DRY principle: Don't Repeat Yourself. Something I yearned for in PHP/Perl/ASP/ColdFusion but, like nirvana, never could find. If ROR can ever so remotely make DRY a practical principle, I will be the first to leave all and start 'practicing the ROR religion'.

    Another defining principle of ROR is Convention Over Configuration, which Wikipedia graciously explains as, and I quote:

    > "Convention Over Configuration" means that the programmer
    > only needs to define configuration which is unconventional.
    >
    > For example, if there is a Post class in model, the corresponding
    > table in the database is posts, but if the table is unconventional
    > (e.g. blogposts), it must be specified manually (set_table_name
    > "blogposts").

    Eh, doesn't sound too bad for a lazy, inefficient web developer like myself, does it... ;-)

    As I get all excited about ROR, I've finally decided to try it out on my little localhost tonight. Taking the plunge, metaphorically. I do hope ROR lives up to all the hype it is surrounded by and that I've indulged in.
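
    Out of curiosity, here is a rough Python analogy of that convention-over-configuration idea (not Rails code, just my own sketch of the principle): a model class derives its table name by convention, and you only spell the table name out when it is unconventional.

    class Model:
        # By convention the table name is the lowercased, pluralised class name.
        # Subclasses only set table_name when their table is named unconventionally.
        table_name = None

        @classmethod
        def table(cls):
            if cls.table_name:                    # explicit (unconventional) configuration
                return cls.table_name
            return cls.__name__.lower() + "s"     # the convention: Post -> "posts"

    class Post(Model):
        pass                                      # conventional, nothing to configure

    class LegacyPost(Model):
        table_name = "blogposts"                  # unconventional, so it is specified manually

    print(Post.table())        # posts
    print(LegacyPost.table())  # blogposts

    The same spirit as Rails' set_table_name "blogposts": you write configuration only for the exception, never for the rule.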


  3. I have tried many, and I never landed on something as great as Xisto.com. They not only provide a great Pro Package, but have a truly wonderful credit system!!!!

    The best hosting that I have had apart from Xisto was Aboho and Aushost. I was using an Aboho free account (no posts required), but it was only 5 MB and 500 MB bandwidth. Then I bought a domain, so I went for Aushost instead, which gives free hosting to domain owners: 50 MB and 500 MB bandwidth. It's only good if you have a homepage, not a portal.

    No host provided enough features to host a portal, except :-) Xisto. I bet whoever comes to Xisto once won't leave it... also, the forum is a great place to socialize.


  4. OK, the other computer did not see the files, but do you know why? In order to know what happened, you need to go to the Windows XP Disk Manager. It's reachable by going through something like Start->Settings->Control Panel->Administrative Tools, etc.
    Then, in Disk Management, you should see both of your disks; look whether there are partitions on the second disk.
    If the Disk Manager sees no partitions, that means you lost all of your data.
    Also check whether the Disk Manager sees the second disk with its right size, else this means that you will not be able to simply use this disk.
    Regards
    yordan


    yordan dude, Disk Management starts with the simple command diskmgmt.msc.
    Secondly, the method I suggested takes into consideration that the MBR is gone (all partitions gone). marretas needs the files on the hard disk, more importantly.

  5. Here is a solution: use diskmgmt to delete and then recreate all the partitions you had earlier, in the exact same sizes. For example, if you had two 10 GB partitions, make two 10 GB partitions. Do not format the partitions; if you do that, the data is gone. Then use software like Recover4All (any tool that can recover deleted files) and make it scan the newly created drives on your damaged HD. Theoretically it should recover most of your files.


  6. I will quote the article from Google's own site:

    https://www.google.com/technology/pigeonrank.html

     

    As a Google user, you're familiar with the speed and accuracy of a Google search. How exactly does Google manage to find the right results for every query as quickly as it does? The heart of Google's search technology is PigeonRank™, a system for ranking web pages developed by Google founders Larry Page and Sergey Brin at Stanford University.

    Building upon the breakthrough work of B. F. Skinner, Page and Brin reasoned that low cost pigeon clusters (PCs) could be used to compute the relative value of web pages faster than human editors or machine-based algorithms. And while Google has dozens of engineers working to improve every aspect of our service on a daily basis, PigeonRank continues to provide the basis for all of our web search tools.

     

    Why Google's patented PigeonRank™ works so well

     

    PigeonRank's success relies primarily on the superior trainability of the domestic pigeon (Columba livia) and its unique capacity to recognize objects regardless of spatial orientation. The common gray pigeon can easily distinguish among items displaying only the minutest differences, an ability that enables it to select relevant web sites from among thousands of similar pages.

     

    By collecting flocks of pigeons in dense clusters, Google is able to process search queries at speeds superior to traditional search engines, which typically rely on birds of prey, brooding hens or slow-moving waterfowl to do their relevance rankings.

     

    When a search query is submitted to Google, it is routed to a data coop where monitors flash result pages at blazing speeds. When a relevant result is observed by one of the pigeons in the cluster, it strikes a rubber-coated steel bar with its beak, which assigns the page a PigeonRank value of one. For each peck, the PigeonRank increases. Those pages receiving the most pecks, are returned at the top of the user's results page with the other results displayed in pecking order.

     

    Integrity

     

    Google's pigeon-driven methods make tampering with our results extremely difficult. While some unscrupulous websites have tried to boost their ranking by including images on their pages of bread crumbs, bird seed and parrots posing seductively in resplendent plumage, Google's PigeonRank technology cannot be deceived by these techniques. A Google search is an easy, honest and objective way to find high-quality websites with information relevant to your search.

    PigeonRank Frequently Asked Questions

     

    How was PigeonRank developed?

     

    The ease of training pigeons was documented early in the annals of science and fully explored by noted psychologist B.F. Skinner, who demonstrated that with only minor incentives, pigeons could be trained to execute complex tasks such as playing ping pong, piloting bombs or revising the Abatements, Credits and Refunds section of the national tax code.

     

    Brin and Page were the first to recognize that this adaptability could be harnessed through massively parallel pecking to solve complex problems, such as ordering large datasets or ordering pizza for large groups of engineers. Page and Brin experimented with numerous avian motivators before settling on a combination of linseed and flax (lin/ax) that not only offered superior performance, but could be gathered at no cost from nearby open space preserves. This open space lin/ax powers Google's operations to this day, and a visit to the data coop reveals pigeons happily pecking away at lin/ax kernels and seeds.

     

    What are the challenges of operating so many pigeon clusters (PCs)?

     

    Pigeons naturally operate in dense populations, as anyone holding a pack of peanuts in an urban plaza is aware. This compactability enables Google to pack enormous numbers of processors into small spaces, with rack after rack stacked up in our data coops. While this is optimal from the standpoint of space conservation and pigeon contentment, it does create issues during molting season, when large fans must be brought in to blow feathers out of the data coop. Removal of other pigeon byproducts was a greater challenge, until Page and Brin developed groundbreaking technology for converting poop to pixels, the tiny dots that make up a monitor's display. The clean white background of Google's home page is powered by this renewable process.

     

    Aren't pigeons really stupid? How do they do this?

     

    While no pigeon has actually been confirmed for a seat on the Supreme Court, pigeons are surprisingly adept at making instant judgments when confronted with difficult choices. This makes them suitable for any job requiring accurate and authoritative decision-making under pressure. Among the positions in which pigeons have served capably are replacement air traffic controllers, butterfly ballot counters and pro football referees during the "no-instant replay" years.

     

    Where does Google get its pigeons? Some special breeding lab?

     

    Google uses only low-cost, off-the-street pigeons for its clusters. Gathered from city parks and plazas by Google's pack of more than 50 Phds (Pigeon-harvesting dogs), the pigeons are given a quick orientation on web site relevance and assigned to an appropriate data coop.

     

    Isn't it cruel to keep pigeons penned up in tiny data coops?

     

    Google exceeds all international standards for the ethical treatment of its pigeon personnel. Not only are they given free range of the coop and its window ledges, special break rooms have been set up for their convenience. These rooms are stocked with an assortment of delectable seeds and grains and feature the finest in European statuary for roosting.

     

    What's the future of pigeon computing?

     

    Google continues to explore new applications for PigeonRank and affiliated technologies. One of the most promising projects in development involves harnessing millions of pigeons worldwide to work on complex scientific challenges. For the latest developments on Google's distributed cooing initiative, please consider signing up for our Google Friends newsletter.

     


  7. Uh... sorry for the garbled post. Initially I got the concept wrong. Now I have it straight.

    That's what rel="nofollow" is made for, but it can be abused too, to stop legitimate links from passing on the PageRank they should!

    Like you said, it can be abused to discredit legitimate links, and I find your argument correct. For that reason I suggest: let Google observe a rel="nofollow" link for a length of time before giving due credit. What I mean by this is that rel="nofollow" shouldn't mean the credit is never given at all, rather that it is suspended. That way Google can wait for the website owner to remove spam links, and still give due credit to a legitimate link after a period of time, once it notices the link still exists.

    I have made a similar suggestion in the Wikipedia discussion. See, the tech is needed for sure... the question is how to prevent its misuse. A rough sketch of what I mean follows.
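
    Just to make the "suspended credit" idea concrete, here is a tiny Python sketch of the policy I am proposing (purely hypothetical, the probation period is made up, and this is of course not how Google actually works): a nofollow link earns no credit until it has survived on the page for some probation period.

    from datetime import datetime, timedelta

    PROBATION = timedelta(days=90)   # hypothetical waiting period before credit is given

    def link_credit(first_seen, still_present, rel_nofollow, now=None):
        """Return 1.0 if the link should pass credit, else 0.0 (hypothetical policy)."""
        now = now or datetime.utcnow()
        if not still_present:
            return 0.0                              # link was removed, probably spam cleanup
        if rel_nofollow and now - first_seen < PROBATION:
            return 0.0                              # credit suspended, not denied forever
        return 1.0                                  # legitimate link that survived probation

    # A nofollow link first seen 120 days ago and still on the page finally gets credit:
    print(link_credit(datetime.utcnow() - timedelta(days=120), True, True))  # 1.0

    So the spammer's link that gets cleaned up never earns anything, while the legitimate link that stays put eventually gets its due credit.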

  8. Hey guys!!! Now here is something even more interesting... I don't know why it has NEVER been discussed here! Google uses PigeonRank technology to find results so quickly. Can you believe it? It uses real pigeons, and flashes thousands of pages in front of them until they peck on one. Someone did mention this technology during the Google OS discussion; I never paid attention. I will create a new topic in the Google section. Check it there.


  9. Following the open source model, ProgrammerAssist.com thrives on the free-world concept. An absolutely simplistic website, no graphics for that matter, and most of the features you would require in a question-answer system. Register a free account and get started right away asking/answering questions. It's a very new website as of now; I got my .htaccess problem answered over there.

    I think people should check out this fine website and also get registered there. I had a talk with the owner, Srirangan (a staunch open sourcist); he says it's just an ongoing effort to create a free rival to the paid website Experts-Exchange.com. Although there are very few participants as of yet, together we can make it a success.


  10. Yes, that's the only thing that can challenge MSN Search: Google implementing Neural Networks too. But don't you think it's simply an overwhelming thought, converting the whole of Google's present PageRank database and web pages to adapt to a Neural Network system? Or probably they could give the searcher options: "Traditional Search" or "Smart Search"... Plus consider the time taken to train the Neural Network. Either way, I say MSN should first get a top-level domain name for its search engine.


  11. Now ignore it, or read it with an open mind.
    MSN Search has the most powerful and promising searching technology. It is based on Neural Networks and NOT on an Algorithm (like Yahoo and Google).

    Difference between an Algorithm, an AI Algorithm, and an Artificial Neural Network:
    1) An Algorithm is flow-controlled logic that works "perfectly" IF implemented "perfectly". It cannot adapt, and it rests on the human brain to develop the logic.
    2) An AI Algorithm is one that can perform intelligent actions based on the situation and conditions. The overall flow-controlled logic is always the same. It uses task-completion algorithms like "Hill Climbing" etc. to accomplish a task.
    3) An Artificial Neural Network is based on the architecture of the human brain, implementing neurons. Neural networks do not use hand-written algorithms, but generate results on the basis of the inputs fed to the network. The network is interconnected neurons with weighted links.

    An artificial neural network is a series of computers that is supposed to learn based on the input provided.

    Think about that for a second – a learning computer. One that doesn't just follow rules assigned to it (which is what the more traditional algorithmic search engines like Google and Yahoo! do) but can actually learn from its results.
    Essentially MSN search learns from input given to it. For example, if the search engine is told that Ebay is considered an authoritative site on online auctions, then when a person performs such a search they should see Ebay.com at the top of the search results.

    Upon analyzing Ebay.com the search engine can then learn why it is considered an authority and apply that learning to other sites to see if they are also authoritative.

    The biggest advantage of such a platform is the engineers at MSN can “train” the system to understand what is considered relevant and important and what isn’t. As time goes on we would expect to see MSN search become one of the most relevant of all the search engines simply because the system is designed to improve itself over time.

    Of course like any search engine, MSN could be tricked. If we knew what those factors were, we could create a page which could be considered highly relevant, based on the MSN search criteria but would in fact be a garbage page. However because of its ability to learn, the system could quickly adapt to such spam content and readjust rankings “on the fly” to filter out these bogus results.

    Another advantage for MSN is that the system should be infinitely scalable, which means that as use of the search grows, it should only be a matter of introducing new hardware or requirements into the system, letting it adapt to the additions and begin using them as if they had existed all along.

    Therefore, as new spam techniques are developed, it's simply a matter of training the system to watch out for the new technique, flag it as potential spam and even potentially react to it by filtering all sites using the new technique.

    By now you are probably saying “holy cow that type of technology must use a ton of resources” and you’d be correct.

    The amount of computational power required by such a system would be immense. Just the storage capacity needed to store what the system has “learned” would have to continue to grow. In addition, the system also has a great crawler out indexing more and more content all the time.

    It’s not your typical algorithmic based engine. With most algorithmic systems, the ranking algorithms are finite in size. With this system, one would expect the Neural Net to continue to grow as new pathways are created.

    Consider this structure as similar to a human brain – as we develop new thoughts and ideas, new synaptic pathways are developed linking areas of the brain to other areas where links previously didn’t exist. Essentially this is what a Neural Network does. While its pathways may not be physical, it does nonetheless develop relationships between previously unrelated sections.

    Therefore, the engineers at MSN have developed ways to “shortcut” the requirements for ranking. Essentially they have said “sure there are over 500 factors determining the page quality, but in this category only 150 are used, therefore you can use the same 150 associated with this category.”

    Overall, as long as Microsoft can continue to support such a system, I would think that it could win out in the “search engine wars.” The system appears (at least on paper) to be superior to algorithmic based systems, and appears to be able to adapt more quickly to changes on the web because it doesn’t have to wait for an algorithm change to adapt, it only has to learn of the change and apply itself.
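
    To make the "learning from input" idea a bit more concrete, here is a toy Python sketch of training a single neuron (a perceptron) on hand-labelled relevance examples. This is obviously nothing like MSN's real system, and the features and numbers are made up; it just shows how a network adjusts its weights from examples instead of following a fixed, hand-written ranking rule.

    # Toy perceptron: learns to score pages as relevant (1) or not (0) from examples.
    # Made-up features: [keyword in title, inbound-link score, keyword-stuffing score]
    training_data = [
        ([1.0, 0.9, 0.0], 1),   # authoritative page            -> relevant
        ([1.0, 0.1, 1.0], 0),   # keyword-stuffed garbage page  -> not relevant
        ([0.0, 0.8, 0.0], 1),
        ([0.0, 0.0, 1.0], 0),
    ]

    weights = [0.0, 0.0, 0.0]
    bias = 0.0
    rate = 0.1

    def predict(features):
        s = bias + sum(w * x for w, x in zip(weights, features))
        return 1 if s > 0 else 0

    for _ in range(20):                             # a few passes over the examples
        for features, label in training_data:
            error = label - predict(features)       # did the neuron get it wrong?
            bias += rate * error
            for i, x in enumerate(features):
                weights[i] += rate * error * x      # nudge the weights toward the right answer

    print(predict([1.0, 0.7, 0.0]))   # most likely 1 (looks authoritative)
    print(predict([0.0, 0.2, 1.0]))   # most likely 0 (looks like spam)

    The "training" described above is essentially this loop repeated over millions of judged examples, with vastly more features and neurons.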



  12. I have been shouting long enough at all the idiotic Google lovers that the moment someone gets power, it tries to exercise a monopoly. Yes, this tag can create a problem for "legitimate" websites, but I think if the Google spider checks that a rel="nofollow" link points to a page on the same domain (or relative path), then it won't be that much of a problem. In short, the rel tag should be honoured only for a link to the webmaster's own pages.

    For example, I can block my guestbook entry links from being spidered, but I shouldn't be able to stop a link to, say, http://www.someone.com/show.php from being credited when it is linked from http://mypage.com/article.php. Google can compare the domains in this case; in other cases, relative paths can be compared. I support this technique, provided the rel tag is checked against the link it is used for, something like the check sketched below.
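
    A quick Python sketch of what I mean by comparing the domains (just to illustrate the check I am proposing, not anything Google actually does): honour rel="nofollow" only when the link points to a page on the same domain as the page carrying it.

    from urllib.parse import urlparse

    def honour_nofollow(page_url, link_url, has_nofollow):
        """Hypothetical rule: only respect rel="nofollow" on links to the site's own pages."""
        if not has_nofollow:
            return False
        page_domain = urlparse(page_url).netloc.lower()
        link_domain = urlparse(link_url).netloc.lower()
        return page_domain == link_domain     # same site: the webmaster may discount his own pages

    # Blocking my own guestbook pages is honoured...
    print(honour_nofollow("http://mypage.com/article.php",
                          "http://mypage.com/guestbook.php", True))    # True
    # ...but a nofollow on a link to someone else's site would simply be ignored:
    print(honour_nofollow("http://mypage.com/article.php",
                          "http://www.someone.com/show.php", True))    # False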


  13. See, what you are missing is that "Content", or data, is only being overlooked here because I just gave a brief scrap of what came to my mind. If I get serious with this technique, I will make a more complex implementation.

    To give a small explanation: I will create a tree structure, just for a single page. When I say I will give more importance to the title tag, it means it will be the ROOT of the tree. The H1 tags (or, to be precise, any bold HTML that shows up prior to plain text) come in as nodes, and the content they discuss comes in as children of those nodes. To simplify lookup, the content is broken up into keywords which have a proper construct (the way the MS Word grammar check works). These keywords are put into an index table (just for that particular page, and in the specific subnode) along with their occurrence frequencies. Since I index only those keywords which follow a proper construct, this stops spammers from repeatedly writing the same keyword over and over again.

    After that I create a diversity factor. Previously, a spammer could still rewrite a sentence with the same keywords many times over. To cut that out, the diversity factor is calculated as a function of the words in a sentence construct. It also includes non-keywords (is, that, the, them, their, etc.), so a unique paragraph with meaningful text gets properly credited. This, along with the frequency table, makes up the index table.

    This index table is then generated for the whole page and belongs to the tree structure. Such a tree structure is generated for each and every page that is submitted, and in the end these trees finally become part of the giant tree called the webspace. The way a page-tree enters the webspace is that it is stored categorically. Categories are created on the basis of keywords, and a page-tree can belong to several keywords (of course), but they are linked with weighted nodes, where the weight of a node tells how prominent that keyword is in the page-tree. Remember that the keyword weight is a function of where it appears in the page, plus the frequency, plus the diversity factor. It can all become a complex mathematical equation if I sit down to seriously work on it. (A very rough sketch of the weighting is below.)

    But the point is... in a world dominated by Google, it's impossible to outperform it. Look at Accoona... a really fine search engine with little future.
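
    If I had to turn that weighting into something executable, a very rough first cut in Python might look like this. All the constants, the stop-word list and the diversity formula are placeholders I made up on the spot; the real thing would need the full page-tree and a proper grammar check.

    import re
    from collections import Counter

    STOPWORDS = {"is", "that", "the", "them", "their", "a", "an", "and", "of", "to"}

    # Made-up weights for where a word appears in the page tree (title = root, and so on).
    POSITION_WEIGHT = {"title": 3.0, "h1": 2.0, "body": 1.0}

    def tokens(text):
        return re.findall(r"[a-z']+", text.lower())

    def diversity(text):
        """Placeholder diversity factor: share of distinct words per sentence, averaged."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        scores = []
        for s in sentences:
            words = tokens(s)
            if words:
                scores.append(len(set(words)) / len(words))   # repeated words drag the score down
        return sum(scores) / len(scores) if scores else 0.0

    def keyword_weights(sections):
        """sections: {"title": "...", "h1": "...", "body": "..."} -> keyword -> weight."""
        weights = Counter()
        for where, text in sections.items():
            freq = Counter(w for w in tokens(text) if w not in STOPWORDS)
            div = diversity(text)
            for word, count in freq.items():
                weights[word] += POSITION_WEIGHT.get(where, 1.0) * count * div
        return weights

    page = {
        "title": "Cheap flights to Goa",
        "h1": "Booking cheap flights",
        "body": "Compare fares and book cheap flights to Goa. Goa Goa Goa cheap cheap cheap.",
    }
    print(keyword_weights(page).most_common(3))

    The stuffed repetitions in the body count for far less than they would on raw frequency alone, because the diversity factor and the body's low position weight damp them.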


  14. Yeah, of course I put the file in my public_html...

    The browser returns a blank page, since it's not supposed to return anything. They say: create a blank page with a given name and place it in your web space. I did that. Then they say the web server has a security issue with the way it handles error pages.

    Maybe you submitted your page before this flaw was detected.

    They say that the error page header should return 404, not 200. Usually a page that is served without any problem is marked 200. They probably don't want that here, for security reasons.

    They also mention that they use a HEAD request (not GET), which means the content of the page isn't looked at at all!

     

    It needs some web server configuration for sure.

     

    Hey!!

    I guess I figured out the problem...

    My pages are not displaying error pages at all! And maybe it's due to the stupid Mambo content management system that I have installed.

    I think that, in turn, is due to the .htaccess file that I modified for SEO-friendly URLs. Well, now who can set this straight???

    When I type an incorrect URL it takes me to my home page... whereas it should take me to the error page. :lol:

    Now how to get this straight?

    OK, I figured out the solution too... The problem was with the .htaccess file.

    My .htaccess looked like:

     

    RewriteEngine On

     

    RewriteCond %{REQUEST_FILENAME} !-f

    RewriteCond %{REQUEST_FILENAME} !-d

    RewriteRule ^(.*) index.php

     

    Now this last line was the error-creating one: RewriteRule ^(.*) index.php

    It means that any request (the pattern ^(.*) matches everything) which does not correspond to an existing file or directory gets rewritten to index.php.

    So no doubt the Google verification check was getting a 200 in the page header. I removed that line, and now it's all working fine. I got my site verified.

     

    Thanx to this awesome support site:

    http://programmerassist.com/

     

    One more thing:

    All Mambo users should note that they should put that line back the moment Google has verified the site; otherwise Mambo will stop rewriting URLs properly.
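
    For anyone who wants to check this themselves before clicking verify: Google says it does a HEAD request, so a quick Python script like this (the host and path are just placeholders, put in your own site and a URL that should not exist) shows whether a missing page really comes back as 404 or as 200.

    # Quick check of what status a missing page actually returns.
    import http.client

    conn = http.client.HTTPConnection("www.example.com")            # placeholder host
    conn.request("HEAD", "/this-page-should-not-exist-12345.html")  # HEAD, just like Google's check
    response = conn.getresponse()
    print(response.status)   # should print 404; a 200 here means the rewrite is swallowing errors
    conn.close()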


  15. You will understand my reaction by re-reading the way Shiv criticizes MS.

    Abhiram, you are right, and I even mentioned explicitly that nearly everyone on this forum began with MS, unless they were born after 1998. See, the fact is, I respect those because of whom I am what I am. I can assure you that if I had begun with Linux, I would have given up all hope of becoming a software developer and chosen another field instead, probably the Air Force. And being an Indian, you will know how much the IT sector matters...

    As I mentioned above, those are the reasons why I find Linux developer-unfriendly. Anyhow, when we talk of servers, Linux is the BEST. I also explicitly stated that Linux contributed nothing to me until I came into engineering, and I came into engineering because of MS.

    Being in engineering, I have realised how wonderful the concept of Open Source is. As a matter of fact, I am myself an open source developer. I have released each and every creation of mine as open source. Apart from that, whatever we study as a subject can only be practically understood under Linux. For example, the kernel: I don't know how the Windows kernel works, and never will, but I can read each and every line of code of the Linux kernel. Likewise, when I study the FTP protocol, I can see a Linux FTP implementation with all its code, and bury the practical concepts in my mind.

    Linux is great and so is Open Source... but MS isn't that bad either! It definitely doesn't deserve the kind of bashing it receives! You will know what I mean once you start programming on both Windows and Linux, and see where you excel more.


  16. This is the error that Google gives me while I try to verify my site:

    NOT VERIFIED
    We've detected that your 404 (file not found) error page returns a status of 200 (OK) in the header.

    The explanation for this error at Google is: This configuration presents a security risk for site verification and therefore, we can't verify your site. If your web server is configured to return a status of 200 in the header of 404 pages, and we enabled you to verify your site with this configuration, others would be able to take advantage of this and verify your site as well. This would allow others to see your site statistics. To ensure that no one can take advantage of this configuration to view statistics to sites they don't own, we only verify sites that return a status of 404 in the header of 404 pages. Please modify your web server configuration to return a status of 404 in the header of 404 pages. Note that we do a HEAD request (and not a GET request) when we check for this. Once your web server is configured correctly, try to verify the site again. If your web server is configured this way and you receive this error, click Check Status again and we'll recheck your configuration.

    So I suppose none other than the admins can help me with this...


  17. To use it like a GMail drive, all you need to do is contact the owner of the GMail Drive software and request him to re-program it to do so.

    It's pretty much the same concept: sending the file as an attachment with specialized headers. What will matter is how much attachment size they allow.

    By the way, how do you use Gmail for site hosting???

    Hey guys, check this!
    I came across this and it's a warning to all the idiots using GDrive or GoogleFS:

    http://blogoscoped.com/forum/22209.html

    Someone please send me a 30gigs invite at neodimension at gmail dot com

    boetaw, can you?
