xisto Community

kam1405241509

Posts posted by kam1405241509


  1. Does anyone use heatpipes in their systems? This is the only affordable thing I haven't tried yet, but there seem to be varying results in the reviews I've read (from HDD coolers that don't make any difference through to expensive system coolers that look very nice .. if I had the money for them!).

    Also, another advantage of overclocking is that the overclocked FSB setting (using relatively more expensive low-latency DRAM) means the CPU gets more bandwidth than stock, which helps in some games.

    Finally, has anyone tried underclocking/undervolting? It's mainly to save on power consumption & noise/heat/etc, using similar techniques to overclocking but in reverse. It might be useful for car PCs ... although I guess most people would just buy a low-power CPU, mini-ITX .. or a laptop!
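    To put rough numbers on the FSB bandwidth point above, here's a minimal sketch; it assumes a quad-pumped, 64-bit front-side bus (P4-style), and the clock values are just illustrative examples, not anyone's actual settings.

```python
# Rough FSB bandwidth arithmetic for a quad-pumped, 64-bit (8-byte) bus.
# The clock figures are illustrative examples, not recommended settings.

def fsb_bandwidth_gb_s(fsb_mhz, transfers_per_clock=4, bus_bytes=8):
    return fsb_mhz * 1e6 * transfers_per_clock * bus_bytes / 1e9

print(fsb_bandwidth_gb_s(200))  # stock 200MHz   -> 6.4 GB/s
print(fsb_bandwidth_gb_s(230))  # mild overclock -> ~7.4 GB/s of headroom for the CPU
```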


  2. A side-point that might be of interest to some people here is that 3Ware make their Escalade series of SATA controllers .. aimed specifically at SCSI-like performance. They have benchmarks on their website/PDF, as do THG (Tom's Hardware Guide).

    I remember reading a comparison review of SATA controllers, incl some onboard motherboard ones, and there was a big difference in performance. This is one of the reasons why SCSI is often looked at as being much better than SATA, but that's going to be the case when comparing a $5 motherboard chip and a $300 SCSI PCI-X card!

    Still, I do think SCSI gives a "smoother" experience during high CPU loads or serious multi-tasking (e.g. running several VMs at once).


  3. While the SCSI vs IDE thing is sort of a moot point for modern ultra-fast computers, SCSI is good for PCs with less powerful processors

    ...

    I get MUCH better transfer rates using SCSI than I do with IDE, even though the drives are theoretically the same speed.

    ...



    I agree. SCSI really comes into its own when you have large amounts of data to transfer. IDE has to use the CPU a lot to send that data. SCSI also has more commands to cope with more complex situations. It is especially useful when streaming several bits of data from different parts of the disk, but that's also partly down to having large disk & controller caches, as well as really decent controllers themselves.

  4. Forget SCSI.  It takes forever to boot.  I used Cheetahs for a while, but IDE drives are just great.  I do video and audio editing and I use IDE drives.  I have no issues at all.  Actually, my Cheetah drive was only 1 year old before it crashed and I lost all the information.  I have never had an IDE drive completely crap out.



    I'm not sure what you mean by it taking forever to boot. If you mean the SCSI controller doing a search of the SCSI bus, then this is a controller BIOS issue (I think I remember modern LSI BIOSes can be set to check only specific IDs, and you can disable channels etc too .. I can check later next week if someone wants to know for sure).

     

    If you mean throughput/access times ... I found my 15k.4 was much faster at booting both Windows & Linux cf my WD 250GB SATA drives.

     

    The only downer is that the caching algorithm is aimed at server (parallel) task machines, so WD Raptors do better for desktop tasks in this scenario (single threaded). However if you have an SMP machine, doing many things at once, my feeling is that SCSI is useful here .. but that's a subjective assessment!


  5. SCSI disks are also made for servers: they can work actively 24/7; no IDE drive would be able to do that w/o errors or failures. SCSI drives can also work under much more stress (no IDE drive would survive an 85°C working temperature (air-con failure in a small server room)), plus there are also some gimmicks which in SATA terms are called Native Command Queuing; they can help speed things up, but hardly for the average user.

    Oh, and another thing: the fastest SCSI drive spins at 15,000rpm, while the fastest SATA/IDE drive spins at 10,000rpm (the WD Raptor). All other IDE/SATA drives spin at 7200rpm, which gives much higher latencies than 15,000rpm (oh, and 15,000rpm is noisy).

     

    Hope this helped a bit more.



    And SCSI will soon make the leap to 22k RPM. I found a few pages referring to these drives, but they are still research/lab prototypes! My guess is we'll see them when drives move to perpendicular magnetic recording techniques.
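    Since spindle speed keeps coming up, here's the simple arithmetic behind the "higher RPM = lower latency" point above (average rotational latency is half a revolution); the 22k figure is just the same formula extrapolated to those lab prototypes.

```python
# Average rotational latency = time for half a revolution.
# Seek time and transfer rate are separate costs on top of this.
for rpm in (7200, 10_000, 15_000, 22_000):
    latency_ms = 0.5 * 60_000 / rpm   # 60,000 ms per minute, half a turn on average
    print(f"{rpm:>6} rpm: ~{latency_ms:.2f} ms average rotational latency")
```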

  6. I once had a P3 700 that was pure SCSI because I hosted a gaming server for a "clan" I belonged to for Q3.  On the very rare occasions I had to reboot it, I honestly didn't notice any real difference between its boot speed and a similar computer I had; it had very similar specs except that it was IDE based.  When I quit hosting the server after my main computer went to computer heaven, I cannibalized both to make a "hybrid" IDE/SCSI computer, and I then did notice a major boot speed difference, since trying to run SCSI/IDE on the same system is like walking along the top of a picket fence at times.

     

    Only then did I notice any real problems.  I would still take a full SCSI system over IDE if I could afford to build one, even with SATA coming out I'd take the SCSI.



    The next-gen SCSI is called SAS. You can plug SAS and SATA drives into a SAS controller to get the best of both worlds (a cheap, large SATA data drive + a 15k RPM SAS boot drive). SAS/SCSI's TCQ is more advanced than SATA's NCQ in terms of queue depth, with more advanced caching algorithms for parallel tasks.

     

    For parallel data transfers with low CPU utilisation, SCSI rocks.


  7. AFAIK you don't need a hardware modchip since there are several xb BIOS software patches online. This has a few legit uses such as installing Linux on the xbox so I guess it's OK to discuss those things here on Xisto. There's no need to open up the xb. I don't think it's illegal to do this, or at least I don't think it should be, but as with most things, there are very non-legit uses of it too, so it's probably best not to discuss it further here on Xisto.


  8. Yes, Safe Mode is now an SVGA mode.  It used to be a VGA mode.  On that note, when you press the F8 key and Windows lets you boot into "VGA Mode", why is it called VGA mode?  It's 640 x 480 with 256 colors at worst, which is already SVGA.

     

    ~Viz


    He he, I thought of that too a while ago. I never found an answer, but my guess is that they think of VGA/640x480 as a subset of SVGA/800x600 (although it isn't a nice divisor at /1.25 ... I guess maybe that was the naming of Super-VGA too .. as a "Super-set" B)). That's the only thing that makes sense to me ... but only at a long stretch of the imagination.

     

    On a sidenote, how VGA itself came about is explained nicely in some slashdot clippings at https://en.wikipedia.org/wiki/Talk:Computer_display_standard

    It's so long ago, or I was so little, that I'd forgotten all of these "standards" :P. And now we're at HUXGA-W .. which doesn't even have any monitors at that resolution yet!!


  9. Well, I really see no reason to overclock... and as for using the computer in 10 years??? I have plenty of computers I still use that are ten years old, and I'm bloody stoked that they still run! With the speeds of processors these days, I see no real reason to overclock; of course, if you wish to, go right ahead. I OC'ed my AMD AXP 2200+ from 1.8 to just under 2.0 GHz, and I saw little to no improvement that actually made a difference. Yes, the numbers changed, but nothing really performed better. It was fast enough to begin with.



    He he, my BBC Micro Model B & Amiga A500 still run ... but I don't really use them any more (maybe once a year .. for nostalgia's sake), so I don't think they count! What do you use yours for, BTW? I'm guessing typical uses are as a HW firewall or fileserver .. or do you have anything different (home automation tasks etc maybe)?

     

    I do totally agree, though. There's absolutely no point in buying a fast CPU expecting it to overclock well .. I've never seen that happen (unless you use phase changing & are very very lucky!). The key is to buy the SLOWEST/cheapest processor at the LATEST manufacturing process .. and then hope for the best.

     

    I should've been a bit clearer in stating that my current PC is a 2-way that's watercooled and isn't overclocked (it's multiplier-locked by AMD .. but it's fast enough since I bought it knowing about the lock!). I still prefer to watercool for the quiet and the reduced-size HSFs. Water is a better thermal conductor than air .. air is actually an insulator! BTW, early Crays were liquid cooled (they were submerged in 3M Fluorinert) .. but there's no way I could ever afford that liquid! Watercooling can be fairly reliable (with safeguards), even for a "left alone" remote server.

     

    Nowadays, CPUs are pretty cheap, though. I would only overclock if I was using the machine for gaming (the only app I can think of that really benefits from single-threaded high clock rates .. all other high-end apps are either media related, for which I'd just use the MPEG codecs within a GPU, or SMP apps) and if I was specifically aiming to do so (by buying a relatively slow, rev-E 90nm AMD chip, for example) .. where I knew I'd want to go above spec. Athlon FXs are aimed at this market, but they are so expensive, and they don't overclock as well as some of the more famous chips (Celeron 300s were the first popular ones, the P4C was my most recent try).


  10. AFAIK, Windows apps look to Windows to see what monitor refresh rates are available. So Windows seems to be telling the app there is a rate available that really isn't. My only guess is that maybe you've not ticked "Hide modes that this monitor cannot display", so apps only get sensible settings from Windows.

    In Linux, you can actually manually specify these rates .. it's in a config file. There is a similar file for this in Windows ... it'll be an .inf file, something like ibmLcd.inf! Another option is to set specific refresh rates in a very cool app I used a while back called PowerStrip (http://www.entechtaiwan.com/). Pretty much the only way to get Windoze to play nice with high-dpi LCDs!

    Finally, have you tried to see what happens when you go into safe mode (I think that's SVGA mode) ... and then launch the app ... maybe that will restrict it to a certain set of refresh rates too. Not useful I know, but at least you'd then know it was this "list".
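    To make the "Windows offers a rate the panel can't actually do" point concrete, here's a rough sketch of the sanity check a driver effectively performs against a monitor's advertised sync limits; the limits and blanking factor below are made-up example values, not any real panel's figures (the real ones live in the monitor's EDID / .inf / X config).

```python
# Toy sanity check of a video mode against a monitor's advertised limits.
# All limits below are invented example values - real panels publish their own.

def mode_ok(width, height, refresh_hz, hsync_khz_max=80.0, vrefresh_hz_max=60.0,
            blanking_factor=1.05):
    total_lines = height * blanking_factor          # crude allowance for vertical blanking
    hsync_khz = total_lines * refresh_hz / 1000.0   # line rate this mode would need
    return hsync_khz <= hsync_khz_max and refresh_hz <= vrefresh_hz_max

print(mode_ok(1280, 1024, 60))   # True  - a typical native LCD mode
print(mode_ok(1280, 1024, 85))   # False - a rate this example panel can't actually display
```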


  11. Another option could be to get a SFF (small form factor) desktop machine, as a compromise. The downers are that there are only a few PCIe slots available, that you might have to lug around a separate screen (although you could build it into the case exterior), and that you'd need a power supply point (although Intel research demo'd a PC connected to a car battery a few weeks ago). If you didn't mind lugging around a small cube, this would let you have your cake and eat it too.

    Personally, my needs are for decent no-compromise performance in 1 main place/PC (i.e. a desktop that can handle everything I need to do, computing wise .. e.g. high-end gaming), plus a crummy low-end small laptop (ultraportable etc) for use on the move. I then connect back to my main machine from work over the Net to tell it to do something high-end if necessary, so I don't really need a desktop PC at work anymore .. I just connect back to my home machine, which is more flexible anyway since I'm the main/only admin on that box .. whereas at work I would need to get permission to do many things, and I'm not allowed to install shareware etc, which is annoying! On the ultraportable I mostly just need to do some emailing & word processing etc whilst on the move!


  12. Laptops are upgradeable nowadays; it's only a bit harder and more expensive to get the new stuff.

    CPUs can be upgraded easily and (ok, I'm not too sure about this one) non-onboard vid cards (eg newer ATI and NV cards) can be swapped out too.

    Memory can be replaced, hdd too.

     

    The hdd is a bit of a weak point of a laptop; they really lack speed (especially budget minded laptops).



    In theory, and I'm pretty sure eventually someone will do it, we could have enterprise SAS (next-gen SCSI) drives in a laptop, since they're made in the 2.5" form factor that laptops use. These are 10k RPM (10,000 RPM), which matches the fastest "normal" desktop drives (WD Raptors are 10k RPM too). But I agree, most current laptop drives are 4200RPM, compared to most desktop ones at 7200RPM!!

     

    As for expandability, one weakness is the limited number of PCI expansion slots available in laptops. This too will soon be a historic problem, since the PCIe v2 spec includes information about external PCI Express connectors .. I'm sure we'll soon be able to plug several PCIe cards (incl perhaps desktop graphics cards) into some sort of docking station (Intel had some concept demos of this in one of their 2002 or 2003 IDFs!).


  13. Keep in mind, IBM and Intel/AMD make completely different types of processors. I personally prefer IBM's offerings. IBM is behind? IBM's POWER architecture is much further ahead of Intel's... Keep in mind that IBM is known for making the world's most powerful computers. In fact, the current record holder IS an IBM machine, using POWER architecture. Intel offers nothing that even comes close. One thing you may be right about: the 970 series may not be their best product to go after (although I definitely wouldn't mind getting my hands on a 970 or two myself...), but don't forget what IBM is developing right now... Cell architecture, which will blow the doors off of anything Intel has to offer. It's not like Intel is making major breakthroughs with putting multiple cores on a chip; it's been done for quite a while now. Any chip manufacturer can do that, and has been able to for the past ten years. And as far as IBM being behind Intel and AMD? Let's not mention that x86 architecture has been around since the seventies...

     

    Or that Microsuck has phased out Intel chips for the new upcoming Xbox 360, in favour of PowerPC chips... Intel has been too comfortable with x86 for so long to abandon it, and every other chip manufacturer took a big step ahead of Intel back in the early to mid nineties...

     

    Cell is coming....you better watch your step, kid.

     

    (I <3 IBM)



    I'm really interested in this too ... but I came to a different conclusion ... I'm happy to be proven wrong though :blink::blink:.

     

    1. Supercomputers are about interconnecting 1000s of CPUs

    Liking a chip architecture just because you like a supercomputer or mainframe doesn't correlate well. The latter are all about fast, custom interconnects between high-FP-performing CPUs ... it's not just the CPUs themselves. Getting a bunch of Cell chips connected on a gigE LAN isn't going to match a supercomputer .. no matter what Sony's marketing dept says! I'll show you what I mean: netlib has some benches on specific CPU architectures for linear algebra computations (LINPACK, n=100, scores in MFLOPS):

    - NEC's awesome vector processor: 2177
    - 3.6GHz Xeon 1MB: 1821
    - 1.9GHz POWER5: 1776
    - 1.6GHz Itanium2: 1765
    - 2.2GHz PowerPC 970: 1681
    - 2.6GHz Opteron 852: 1593
    - 3.2GHz Nocona Xeon EM64T: 1593
    - IBM eServer pSeries 690 1.7GHz: 1462
    - IBM IntelliStation POWER 275 1450MHz (POWER4+): 1245
    - IBM eServer pSeries 630 6E4 1.45GHz: 1229
    - Cray T94 (4 CPU): 1129

    All of these are SINGLE CPU scores! Of course individual CPU scores are meaningless in supercomputers ... what matters is how several thousand of them compute the linear algebra problem together! Also, pricing is really important to me. It doesn't matter that Itanic benches 250 MFLOPS faster than the fastest Opteron if it ends up costing me more than twice as much .. plus the extra air conditioning etc required! I have seen a guy with an Itanium desktop, BTW, but that was before AMD started to win 64-bit marketshare!!

     

    I'm not sure where you get your stats about IBM from but, according to top500's latest report, 66.6% of the top 500 supercomputers in the world are Intel based & 5% are AMD! 10.4% are POWER & 5% PPC. Specifically: 35% P4-Xeon, 15.2% EM64T, 15.8% Itanium2, 5% Opteron, 4.4% POWER4+, 3.2% PPC440, 3% POWER4, 1.8% PPC, 1.6% POWER5, 1.4% POWER ... However, the top 2 machines are IBM Blue Gene machines (one of them is at IBM T.J. Watson .. not really a "sale" :blink: ). #3 is an Itanic2 machine. Earth Simulator is now #4, and the Virginia Tech MacOSX cluster is now #14. There was an article at the end of last year about Opteron "winning three of the top 20 spots: the Shanghai Supercomputer Center's Dawning 4000A at #10, the Los Alamos National Laboratory's Lightning at #11 and the Grid Technology Research Center's Super Cluster P-32 at #19" .. things have changed now but will obviously keep changing again.

     

    If you stick 1000s of POWER chips together, I'm sure you'll get a great supercomp, but I can't see how FP performance improves that much when we're talking about the same number .. I'm guessing you're talking about may be a couple of CPUs in a cc-NUMA or SMP workstation .. or maybe a cluster (please correct me if I've assumed wrong, BTW).

     

    2. X86 is less of an issue these days

    The x86 architecture is ugly compared to RISC designs, no doubt about it, but these days RISC chips have CISC add-ons, and vice versa (SSE3 etc)! Also, age is sometimes a good thing .. they've had lots of time to fix some things ... at last HW partitioning is doable :P. Intel did try to abandon x86, with Itanic/IA64 ... unfortunately v1 cost a bomb without enough of a performance increase for x86 apps!! It was aimed at FP performance (it did well in this, see above) and at being a proprietary-UNIX-system killer, but its designers ignored the masses and the mass of software written for x86 (and the fact that closed source software won't simply be recompiled overnight, as it were). Mass production reduces costs. x86 is open to all willing to compete, and hence we have the nice current situation of AMD & Intel pushing each other to improve. AMD came in understanding this and took over this slot, giving 64-bit memory addressing & integrated-memory-controller scalability benefits without throwing the baby out with the bathwater.

     

    Personally, I couldn't care less what architecture my main PC is, since the majority of my important code (i.e. for work) is my own and can be compiled to whatever happens to be i. fast & ii. cheap :blink:. The next most important stuff is commercial x86 binaries for NT/Linux (some maths packages). The rest is opensource apps or win32/win64 x86/x86-64 games (not as important since I'm happy to have a separate machine for those). It would be nice to have the good old days where there was tons of competition, I guess :blink:, but not just for the sake of it. I want there to be bang for buck. If I can get a massively popular (and therefore potentially cheap) x86 CPU that has good FP performance (like the Opteron in the above benches), then I'd go for that rather than a POWER/Alpha/etc that's similar in performance but with a slightly higher price tag .. unless there's a massive improvement (I'll get on to Cell in a second!!). Also the fact that I can run my CAD apps helps. BTW, did you know that IBM's official CAD app actually only runs natively on Win32 ... on POWER it actually runs via a WINE-like library layer ... that was quite a depressing shocker!! They advise you to run NT B). But that's another matter!

     

    Mind you, soon even this will be irrelevant! The instruction set is just one part of the equation. Look at what the now nearly dead Transmeta (amongst others .. there was this Russian design, plus several software companies are doing research similar to this) managed to do ... x86 code could be compiled into a very parallel VLIW underneath without losing that much in efficiency (if they had the manufacturing links they could've done better ... intel looks like it is going to pick up the gauntlet and go down this direction, taking aim at their own StrongARM CPUs according to the latest IDF because they don't want to pay ARM royalties anymore ... I actually love ARM chips, but that's another matter!). Intel claim they will soon bring out an ARM-like efficient chip that's x86-compatible .. or at least that's their aim .. I'm not entirely sure that's really possible, but anyway!!

     

    3. IBM/AMD & Sun/AMD, Apple/intel, Cell

    IBM itself isn't a good reason to vote for PPC, because IBM and AMD partner around chip fabrication, although IBM are now aligning themselves to Intel instead. Also, Sun & the startup they took over are helping AMD design very large-scale 64-bit x86 systems. Apple have given up on PPC and gone to x86 because the G5 hasn't had the expected speed bump for ages. IBM are focused on Cell, and Cell for PPC apps is damn slow.

     

    I was really looking forward to Cell. It promised a focus back on pure uncompromised performance a la Seymour Cray & the DEC Alpha guys. But I've recently been reading a lot of negative press from developers using both PS3 & XB360 dev kits, and these are the first guys programming the machines. The former is basically a Cell workstation, and the latter is a modified Mac running at x% of final performance (x being a fraction, but they can figure out final performance from that so it's not an issue). In both cases the developers are very disappointed .. so much so that many were prepared to publicly complain. The main downer claimed is the limited amount of cache available to each of the Cell's VPUs, and the poor real-world performance of the multitude of VPUs given that they can't get enough data into them with such a puny amount of cache, and ditto for the low-clocked PPCs given that they weren't designed to do much! They were hoping to have a leap in CPU performance for AI/physics ... but programming multiple CPUs effectively is extremely difficult. Carmack's SMP Quake could sometimes actually run slower! Sure, you can say the developers have to figure it out .. I think it'll be a very long time in the making! Imagine using a PC with quad PIIs rather than one fast P4 for gaming!! A whole load of algorithms are going to have to be figured out and rewritten in parallel, with no guarantee of being any faster once the inter-process comms overhead is accounted for.
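    Just to show why "more cores" doesn't automatically mean faster once comms overhead is counted, here's a back-of-the-envelope Amdahl-style sketch; the parallel fraction and per-core overhead are made-up illustrative numbers, not measurements of Cell or anything else.

```python
# Amdahl-style estimate: speedup from N cores when part of the work is serial
# and each extra core adds some fixed communication/synchronisation overhead.
# All numbers are illustrative assumptions, not measurements.

def speedup(n_cores, parallel_fraction, overhead_per_extra_core=0.02):
    serial = 1.0 - parallel_fraction
    parallel = parallel_fraction / n_cores
    overhead = overhead_per_extra_core * (n_cores - 1)
    return 1.0 / (serial + parallel + overhead)

for n in (1, 2, 4, 8):
    # e.g. a game loop where only ~60% of the frame parallelises cleanly
    print(n, "cores ->", round(speedup(n, parallel_fraction=0.6), 2), "x")
```

    With those example numbers the speedup tops out around 4 cores and then starts sliding backwards, which is roughly the complaint the dev-kit people are making.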

     

    The reason these consoles went for PPC is because IBM enticed them with THEORETICAL performance-per-buck. This is great for marketing etc. Yes, there were demos of rubber duckies in bathtubs, and multi-stream HDTV decoding ... but in terms of actual game footage, Sony didn't show much .. and had even lied about some of the footage (later found to be prerenders)! A PC with a PPU will probably beat them. The same thing happened in the previous generation, with Sony claiming supercomputer performance ... I still don't see games that look as good as those demos! If developers can't keep those parallel cores filled with the appropriate data, performance will suffer. A good analogy is the P4's long pipeline, which has a whole load of smart compiler tech to decide which data should be loaded next before it's needed ... predicting the future is never easy!

     

    So right now I'm not too sure about Cell. I do hope these complaints are just initial problems .. since I'd absolutely love a cheap supercomputer on a card for maths .. I could then avoid going to my lab so often B). But I have a feeling these guys are right .. they know what they're doing & wouldn't simply complain against MS/Sony just for the hell of it! They are putting their neck out doing so!

     

    BTW, have you guys seen the stuff on Sun's Niagara? I'm looking forward to it .. there was a cool article in IEEE Spectrum magazine a few months back. It sounds great if it's really as good as they are saying (one of their research prototype chips can supposedly match a 16-way Opteron on FP tasks, and they are talking about placing 128 of the cores on a single chip ... but maybe that'd be 2010!!). As with all things these days, I'll wait to see the real-world benches before making any final judgements! If there are any other cool CPUs anyone knows of, please post a list for discussion. Others I know of are things like ClearSpeed etc ... but they are so damn bleeding expensive, I can't take them seriously. If you have a limited space requirement for supercomputing performance, though, then it's one of only a few similar companies doing this!

     

    I'm not a pro-anything person. I have Intel & AMD boxen, and nearly bought a G4 on a PCI card (I didn't, due to the poor performance of the cheap ones I could afford/justify!). I also (have to!!) use several OSs incl Windows, Linux & Solaris .. and even OSX has a few apps I really like (compiled to PPC .. I use my lab's G4 notebook every so often!). It would be nice to get a definitive answer from someone who's used the above processors in serious workstation/cluster setups.

    Kam.


  14. Intel/AMD bin processors at specific frequencies based on the quality of that run. However, based on demand/supply, they can often end up sending out higher-frequency-binned chips at lower clocks ... and that's when we end up being lucky. The P4C variants are one such famous batch. It often occurs when the manufacturer goes to a new, smaller process (eg from 110 to 90nm), and at that point the CPUs are running at a lower TDP (temp).

    At the recent IDF, intel actually stated that even their business motherboards will soon have a mild overclocking option. Everyone's doing it these days .. it's not dangerous if you don't go over the top & if you take the precautions mentioned in other posts about temp monitoring & auto shutdowns etc!

    I've got several setups. My older ones are air-cooled, and I've gone off these since the current/future HSFs look like freaking giants .. and I don't like the idea of wasting that much space just on cooling a tiny chip. Water cooling is much more efficient, and my current setup is this. It's not dangerous, again, if you plan things out. I use non-conducting deionised water, with some added coolant, so if there is a leak, I don't get fried chips :blink:. My pump is a submersible one, so filling it up is kind of a pain, but once everything's set up, it's great ... I just hope I don't have to change PCI cards for a while (my graphics cards are also liquid cooled .. so they're a real pain to change). The advantages are that it's a much smaller setup (I just have one main fan cooling the liquid, another cooling my drives, though this might move to liquid cooling eventually, and a few at the rear of the case incl the PSU). My current setup is quieter than my older air-cooled PC even though this one has 2 CPUs .. the 1-CPU PC actually gives me a headache .. well, not quite, but it's very annoying once you're used to whisper quiet. One of my older setups was a VapoChill ... but that's even more of a pain (for a lot of gain!). The biggest downer is that once greased, it's nigh on impossible to degrease the CPU .. so you can't easily sell it on unless that person wants to subzero cool it as well!!

    Finally, there are several off-the-shelf water-cooling kits out there, like CoolerMaster's AquaGate. I was initially interested in this, since it'd in theory be very reliable coming from a company like CM, until I found out that the pumps were actually within the CPU waterblocks. Not only does this add bulk that I was trying to get away from, it also means stupid numbers of pumps in the system if you want to cool your GPUs too etc. It would be nice to have an "all in one" solution that fitted in a drivebay, and maybe some of you might be interested enough to give it a go. For me, I prefer separate parts simply because it means I have full control of the design, and it can be very minimalistic!

    Post-finally!! Also note that there are TWO different sizes of hoses, if you are going to go down this route, and your waterblocks etc ideally should be compatible (otherwise the flow is less efficient etc, obviously).

    I can go into more detail on my specific parts, if anyone's interested in watercooling. And I've got a whole load of guides on it too. I remember THG (Tom's Hardware Guide) had several intro videos on this that were useful, plus several guides on parts too.

    Personally, I can't see why you wouldn't overclock a chip .. so long as you test out where it is stable (do a 24-hour test running q3ademo etc). And, within a few years, it'd basically not be your main PC, so you can then start using it at the "normal" rate .. so it'd last a few more years as your fileserver etc B). So what's the big deal?! I've overclocked 3 PCs, and none have ever died on me. The only PC to have ever shown any problems is my ancient (13 year old) 486's floppy drive & ethernet controller (which is still being used as a dumb terminal :P )!!

    Kam.


  15. You may want to consider Python.  Take a look at Blender3D, which has an integrated game engine to create 3D based games.  Also, Blender can be used to export to the Crystal Space 3D engine.

     

    Seems like there are a lot of game engines, at least in Opensource land, that are using Python more and more.



    Totally agree. There are some amazingly good opensource 3D game engines incl Crystal Space, and a lot of other useful libs that use it (e.g. for animated character controls, etc). To me, this is the main benefit, since you can be up and running very quickly, yet all the source is there for you to tweak it once the basics are mastered.

     

    If you're doing something in 2D, Java is fine. In 3D, I wrote some code ages ago using VRML2 with Java. And now there's Java3D. Both are very easy, and have a lot of programmability. I've seen some impressive 3D Java games, including a racing game that looked like 1995's Screamer. The only downer is that performance isn't near C++, unless you're prepared to spend a lot of time tweaking, and in the end you'll probably only get to 1995's graphics standard. There's not been that much done in J3D for a while now, probably since most people use D3D/OGL instead. Also, there are books on performance optimisation for Java & C++, and there are some tools to compile down to native binaries .. which defeats the purpose of Java but saves time in trying to speed it up!

     

    http://rss.slashdot.org/Slashdot/slashdot/to?m=256

    and

    http://forums.xisto.com/no_longer_exists/

    are fairly recent discussions/articles on Python scripting for game engines. Some commercial engines even have Python scriptability :P.
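    I won't try to reproduce Blender or Crystal Space API calls from memory, but just to show the flavour of scripting game logic in Python, here's a toy, engine-agnostic sketch; every class and name in it is invented for illustration, and a real engine would hand you its own entity and update hooks.

```python
# Toy, engine-agnostic sketch of Python game scripting (all names invented).
# Real engines (Blender GE, Crystal Space bindings, etc.) expose their own APIs;
# the point is just how little boilerplate the scripting side needs.

class Entity:
    def __init__(self, name, x=0.0, y=0.0):
        self.name, self.x, self.y = name, x, y

    def update(self, dt):
        pass  # overridden per entity type

class PatrollingGuard(Entity):
    def __init__(self, name, speed=1.5):
        super().__init__(name)
        self.speed, self.direction = speed, 1

    def update(self, dt):
        self.x += self.speed * self.direction * dt
        if abs(self.x) > 10:          # bounce between two patrol points
            self.direction *= -1

def run(entities, frames=3, dt=1 / 60):
    for _ in range(frames):
        for e in entities:
            e.update(dt)
    return [(e.name, round(e.x, 3)) for e in entities]

print(run([PatrollingGuard("guard_01")]))
```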

     

    I've also played around with UnrealScript in Unreal Engine v2. But I ran into a brick wall and decided to DIY something simpler yet customisable. If I get the chance to start from scratch again, I think I'd most likely go down the above route with Crystal Space 3D B).

    Kam.


  16. I have a Logitech trackball (with the thumb on the ball & fingers on the buttons, rather than fingers on the ball .. I much prefer it this way .. though I know several mates who far prefer the trackballs aimed at forefinger-based use!) ... along with an MS Optical IntelliMouse. I even have a small keyboard with an integrated trackstick, but that's annoying and only useful on a couch for media-type apps (PVR).

    I've tried using both in flight sims, and hated it. A proper stick (it doesn't have to have force feedback) is the only option here IMHO.

    I've also used them in FPSs. I get better scores when using the mouse, but I still prefer the trackball, simply because there's less movement AND faster/longer movement (no need to lift up the mouse at an edge etc)!! The trackball isn't usable in FPSs unless I set the y-axis to be very insensitive compared to the x-axis. So ideally, you need an optical sensor that has a high resolution (as high as possible). As far as I know, there are no high-res "gaming" trackballs, though there are many gaming mice .. this is simply because the majority of people buy (and presumably prefer!) mice for both gaming & productivity apps.

    The trackball comes into its own in RTS/RPG/adventure games ... especially C&C :P, because you can scroll a massive map by doing a quick flick of the thumb & letting the ball roll on & on for a short while.

    The thing I like about mice (the optical/laser ones, not ones needing mats) is that you can effectively set up specific points on your desk for specific relative angles in the FPS. This is the closest there is to a VR-like headset tracker! And this is where trackballs are at a serious disadvantage (if you use your mouse in this way). Personally, I don't play like this .. I simply think about moving the "aim" crosshair a "bit to the left" etc. In other words, relative rather than absolute positioning.

    For ergonomics, my personal view is that if you move less it's better. This is why some people prefer UNIX to Windows, as there are fewer mouse movements required in a less-GUI desktop environment .. one that can be set up with a number of keyboard shortcuts and scripts (though WSH & Vista are aiming to go more towards this approach!). You should also have a decent amount of desktop space for your arms to rest at 90 degrees .. another common statement .. 90 degrees at the knees, elbows, etc! Finally, ergonomic keyboards (like my MS Natural one) basically force the user to stop moving their arms, and only move their wrists etc. There are papers that say this is bad, however (hence Logitech made their recent totally-flat keyboard!). There's scientific proof in both directions, so my guess is that again it's really dependent on the user. If a user presses down hard on the wristrest of a Natural keyboard, then there are likely to be problems. The research I mentioned above argued that it's best to have an almost straight line from finger-to-wrist-to-arm .. i.e. no bends .. but I do this with my ergonomic keyboard by adding extra lift at the wristrest, and by simply not using the wristrest .. I prefer to point my fingers & arm towards the keys ... can't really explain without a diagram, but I hope that's at least vaguely clear B) :blink:!

    I think each person will have very different tastes when it comes to trackballs vs mice (the arguments were there even a decade ago .. though they were less about FPSs then, of course!), and you really do need to try it out in the shops first (and then order online :blink:), rather than depending on luck or others' opinions to get your ideal HID.

    Sorry for the long verbose messages I'm posting .. old/bad habit of mine! Promise the next ones will be shorter & more concise!!

    Kam.


  17. Hi WeaponX :-),

    like vizskywalker said, you don't want a DC to AC adaptor if you can avoid it. Ideally you should try to find something that plugs into the cigarette lighter (12V on current cars ... future cars will eventually move to the 42V standard, but I'm not sure when that will happen .. there was something in an IEEE Spectrum mag a few years ago about that).

    Anyway, if you can't find the above, then get what I bought from the UK shop Maplin (I'm sure any electronics store in the US will have the same parts .. maybe there's an RS or something in the US?) .. what you'd need is an 800W soft-start power inverter at 12V. It will look like a box that has a standard "wall" outlet in it that you plug your laptop/PC's power plug into, and a cigarette-lighter connector at the other end. Mine converts 12Vdc to 230Vac .. and it runs my car PC just fine :P. But as with all conversions, there is some efficiency loss (mine is officially rated at 600W, but my PC cuts off if I actually go to a 570W load).

    Finally, your question about power consumption. You can test your loads with a watt-meter before actually trying it out in the car etc. Also, you're in a car, so if you don't mind a little extra fuel consumption, you could always just carry an extra batt in the boot. Finally, I've never had a problem of running out of juice. My car (and I think most cars) trickle charges up the battery when you drive and are not using it (I only use it for GPS, and some mobile emailing/documents/etc nothing serious .. it doesn't really go over 100W .. but that's because it is a low power PC). AFAIK, there's no serious limit to recharging lead-acid batts, and they should last for ages :-).
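    As a rough way of estimating runtime off the battery, here's the back-of-the-envelope arithmetic I'd use; the battery capacity, inverter efficiency and "usable fraction" below are assumptions for you to swap for your own figures, not specs of any particular kit.

```python
# Back-of-the-envelope battery runtime estimate. The capacity, efficiency and
# load figures are illustrative assumptions - plug in your own numbers.

def runtime_hours(load_watts, battery_ah=50, battery_volts=12.0,
                  inverter_efficiency=0.85, usable_fraction=0.5):
    # usable_fraction: don't run a starter battery flat, or it won't start the car
    usable_wh = battery_ah * battery_volts * usable_fraction
    draw_watts = load_watts / inverter_efficiency
    return usable_wh / draw_watts

print(round(runtime_hours(100), 1))   # ~100W car PC: roughly 2-3 hours
print(round(runtime_hours(570), 1))   # near the inverter's limit: well under an hour
```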

    You might be interested in other car PC projects (google for car pc), and for other low-power small PCs (google for mini-ITX and nano-ITX).
    http://www.mini-itx.com/ is one of my fave sites.
    Also, there was a recent IDF demo (the one that was just held!) where an intel researcher (Bill Sui) connected his PC to a car-battery, and he'd written some UPS-like code for it etc. He had another cool demo: "Sweet, huh? If that's not cool enough, the front of the machine also sports a hard button that enables the PC to roll back to a safe point upon a problem with the machine - effectively wiping the hard drive and rolling back to a stored disk image. It's kind of like System Restore Points in hardware, on steroids." (http://www.bit-tech.net/news/hardware/2005/08/23/intel_pc_car_battery/1; http://forums.bit-tech.net/showthread.php?p=1049296).

    In that latter URL are 3 relevant posts .. "Believe it or not, but a car battery will actually power a typical computer for longer than a lot of the more expensive UPS's actually do. Friend of mine runs an inverter in his van, and with his system (2.8 p4, 2 hdd, 2 optical, 6600gt, 350watt psu) and his monitor (15"), and the battery will last (with the van off and the dome light on) roughly half an hour or more."

    and
    "Powering a PC with a UPS is very inefficient, basically because the UPS circuitry goes from DC to AC (inefficiencies in the inverter) then the PC goes from AC back to DC (with more inefficiencies in the PSU). What Intel have done is strip out the conversion stages, and provide 12V straight to the power circuits (which is the same thing that happens with SFF PCs that use an external power brick)."
    The external power supply mentioned above is in the mini-ITX.com site's shop ... very very cool (this is what I ideally want to get eventually!!)

    and finally "What's so inovative about this? There are plenty of DC/DC power supplies for PC's out there.. One such supply is a 90watt dc/dc 6-24v input for 69usd. There are actually many companies that make atx spec dc/dc psu's out there."
    There's a link to an American shop selling the above, which might be useful for ya B). You didn't tell me what your laptop's power consumption is, but my guess is that the 800W inverter route is the least elegant (but it is very flexible if you want to run other appliances off of it!!).

    hope this helps & was fairly clear,
    if not feel free to ask more details .. car-PC's are definitely a lot of fun to setup (er, and use :blink:) .. and eventually I'm sure everyone will have one :blink::blink:,
    Kam.


  18. Oh wow... you guys must really be in the dark. PS3 will be the way to go. Why? IBM has their fingers in it... And while the Xbox 360 will use IBM PowerPC technology, Sony will be using IBM's next-up technology, the Cell processor. Sony will definitely have the hardware advantage over the Xbox 360. I'm not too sure about Revolution, I haven't heard much about it. I'm not much of a gamer... but I am dying to see what exactly Cell can do... it's supposed to be the next technology that will change computers drastically from what we know now.



    Sorry if this is a bit non-gamey/technical (I'm a programmer who writes code that's somewhat similar to games .. best if I don't explain it as it'd bore you guys to death!!), and if it's a bit negative (I'm naturally pessimistic & would love to be proven wrong!!). I used to love PC and console games ... but lately I've begun to prefer PC ones ... not sure why that is & I'm guessing maybe it's just me changing rather than there being any difference in the games themselves.

     

    Every time Sony comes out with a new console, their hype machine, er, Marketing Dept, claims they are aiming for supercomputing-level entertainment. Logically there's no way a $500 console will ever compare to a 1000-node scientific cluster .. we saw that with the previous generation (hardly anyone created a supercomputer from a bunch of consoles .. a few groups tried but got lame performance compared to a modern PC cluster)! Nowadays, even Cray are selling x86 PCs (albeit ones with many CPUs in them tied together over a very very fast custom network bus ... and with a bunch of other expensive custom chips specific to maths/physics ... similar to the VPU idea that Cray started, but with more local memory, full access to system memory and, to some extent, greater programmability in the sort of maths that can be done).

     

    The new way of thinking about computing isn't new at all. It is exactly the same design statement as the highly parallel vector processing Crays (and to some extent Alphas) of yesteryear, where there is no compromise on performance. This is a good thing if you want to max bang per buck. However Sony have severely limited the amount of cache available to each VPU. And developers are already complaining that this severely limits the real-world benches (theoretical vector throughput means nothing really .. but marketing like to brag these numbers when the real-world ones are so poor or unknown, as is the case for new architectures like Cell). Sony did give a really impressive demo of the Cell decoding a whole load of HDTV streams .. but this is a limited and repetitive set of tasks .. I'm more interested in games/graphics engines than movies! They gave a whole load of physics demos (the zillions of ducks in a bath tub), but it's nothing a dual-core PC or PC with a PPU couldn't do already. It doesn't seem to be that amazing to me.

     

    As for the linking multiple Cells together .. you need a very fast network connection to sensibly packetise work for distributed Cells because of the latency issue in the case of a real-time game.

     

    I've been looking at the discussions online from various developers who are using the Cell developer kit, and the Xbox 360 Apple-Mac dev kit .. and both seem to be pretty disappointed by the CPU performance. Summarising, they were hoping for enough CPU power to enable a giant leap in AI and physics ... but it turns out that nonlinear ordering of code isn't so easy on either platform ... it's difficult on PCs too BTW, so this isn't an anti console statement. It's just that Sony seems to think they can create a complex piece of hardware and expect developers to instantly make decent use out of it ... whilst MS do all they can to make it as easy as possible to develop for without losing any performance ... this generation will take a long time to make use of the CPU power.

     

    MS's tools look much better .. actually it's pretty impressive. They help you port PC code to the XB360 and vice versa. The next Windows OS (Vista) will even have Xbox-Live and the USB2 Xbox-360 controller (BTW, I too love the PS2 controller .. but I think the PS3 controller looks far worse)! And they are going to encourage developers to develop/port games both ways (more than they had with XB1).

     

    On the bright side, graphically they'll both look great .. they're pretty much equal really, Sony using NV and MS using ATI. I'm really looking forward to the next-gen Resident Evil, and even the next-gen Sonic .. but I wouldn't buy either console just for those. You probably figured that I prefer PC games ... not just the usual FPSs, but I even like combat flight sims like LOMAC and I prefer racing sims like the PC's GTR to TOCA/GT4/Sega (although I still love to play the latter for the better graphics & more arcadey thrills)... so maybe it's best that you guys don't really listen to my views; I think I'm a very different sort of gamer to the "norm"!

     

    BTW, did you guys notice that the latest Sega arcade board ("Lindbergh") is a PC (http://www.gamespot.com/articles/sega-unveils-lindbergh-games/1100-6132425/) with an Intel CPU and Nvidia GPU .. but they have DRM hardware so that you can't simply run arcade games on your PC .. yet!! I don't think the VF game looks that much better than VF4-evo, but the Sonic video looked pretty amazing. Basically if you remember the prerendered stuff on the Dreamcast game, it kinda looks like that (but is in real-time).


  19. Please help me choose. I need to change my hardware. I have to decide between two scanners:

    - Epson Perfection 2580 PHOTO

    - Epson Perfection 2480 PHOTO

    Does somebody know both of them? They look rather similar, but there is a significant difference in price: 20 dollars. I would like to know if the difference between them is worth the 20% increase in price.

    I want to scan photos and print them on photo paper; that's why I want this kind of scanner.

    If you know these models, please tell me.

    Regards,

    Yordan



    Hi Yordan,

    Physically they look very similar, if not near identical, but there are some big differences, actually.

    The 2580 has an auto film loader (similar to the auto sheet-feeders in photocopiers .. very handy for saving time that would otherwise be wasted) built into the lid .. it sounds trivial, but it's much less annoying when dealing with loads of film to scan.

    This is in addition to the manual 35mm film slider scanner that both have.

    PCW said "People with large film archives will like this scanner, which is the first flatbed model we've tested with an automatic film loader" (http://www.pcworld.com/article/119449/article.html) .. and it came number1 in their top-10 scanners list (http://forums.xisto.com/no_longer_exists/), with the 2480 coming in 3rd.

    I think this is the main reason for the extra cost .. if you do a lot of 35mm scanning, then it's well worth it (most people keep their original 35mm films with the photos so it's worth it in this case).

    What it means is that you can place a stack of films in the scanner, press the button, go away and make a coffee, and then come back and it's all been scanned to a bunch of predetermined or incrementally increasing numbered digital files .. all automatically!

    On the 2480, you'd have to take each set of 3 films and scan them in, then get another 3 and scan those in etc .. you have to be at the scanner feeding them in manually, which gets to be a real pain after a few days!

     

    The optical resolution of the CCD on both is 2400x4800 (which should be good enough for pretty much all non-pro/archival tasks .. 300dpi horizontally & 400dpi vertically). Ignore the other figure (12800x12800) because this is just an interpolated value that's produced in software.

    The 2580's CCD has a colour depth of 48-bit (281,474,976,710,656 = 281 trillion colours), vs the 2480's 42-bit. Its sensors can "see" (distinguish) more colours/shades in the photo you scan, which may be important if you want to scan high-quality colourful photos .. specifically ones with many shades (e.g. human faces that are in sunlight and shade simultaneously .. i.e. varying shade across the face).
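    For what those bit-depth figures mean in practice, here's the quick arithmetic (this assumes the bits are split evenly across the R, G and B channels, which is how scanner specs are normally quoted):

```python
# Quick arithmetic behind the scanner bit-depth figures (assuming the bits are
# split evenly across the R, G and B channels, as scanner specs are quoted).
for bits in (48, 42):
    per_channel = bits // 3
    print(f"{bits}-bit: {2**bits:,} colours total, "
          f"{2**per_channel:,} shades per channel")
# 48-bit: 281,474,976,710,656 colours total, 65,536 shades per channel
# 42-bit:   4,398,046,511,104 colours total, 16,384 shades per channel
```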

    Again that PCW page states "In our standard image-quality tests, the 2580 won first place among all small-office scanners".

     

    Both have a USB 2.0 interface (up to 480Mbps), but the 2480 is still a lot slower at scanning.

    In http://forums.xisto.com/no_longer_exists/ & http://forums.xisto.com/no_longer_exists/ PCW states that:

    - the 2580 has better greyscale bit-depth (the number of shades it can distinguish as I mentioned above),

    - the 2580 has "outstanding" vs "very good" performance regarding time per scan

    2580:

    Average 1200-dpi color scan speed 28 seconds per document

    Average 300-dpi monochrome scan speed 21 seconds per document

    2480:

    Average 1200-dpi color scan speed 31 seconds per document

    Average 300-dpi monochrome scan speed 23 seconds per document

    It doesn't sound like much of a difference for the full-scan times .. but it all adds up if you are doing many scans regularly.

    - but it has a smaller transparency adapter (1x1.5in vs 4x1.5in) .. which is weird .. but like I said I don't think it's that important, at least, not to me!

    Also note that, along with the actual scan, there is time needed to send the image over USB2 to the PC. Don't ever get a USB1 scanner (they still sell them .. not the ones you list, BTW).

    I have used a USB1.0 scanner before, and when scanning at high-res, the scanner had to pause whilst the PC grabbed that part of the image .. very very frustrating!

     

    The PCW reviews state "The 2580's impressive performance earned it the second-highest ranking in our March 2005 issue among small-office scanners for overall speed. It pumped out a 2-by-2-inch color photo at 1200 dpi in just under 28 seconds--the fastest score of all our currently tested scanners."

    and the 2480 "scanned a 4-by-5-inch black-and-white photo at 600 dpi in just under 26 seconds; most other SOHO scanners took from about 28 to 41 seconds to complete the same scan"

     

    The only other difference is that the 2480 comes with a transparency adaptor (along with the common 35mm film adaptor) but that's not really that important (I just cover a transparency with a white background sheet and they scan fine on my scanner .. also an Epson!).

    They both officially support Win98/ME/2k/XP and MacOSX, but only the 2580 supports MacOS9 officially.

    Dimensions are the same, but for weight, the 2580 is 0.6lbs heavier at 6.6lbs.

    Both come with a 1 year warranty.

     

    In summary, get the 2580 if you want the faster scanner with the ADF-like auto film feeder and better colour (greyscale shade) depth! Otherwise save your money and get the 2480. Personally I'd go with the 2580 from your brief description, esp if you're going to scan on a daily/weekly basis.

     

    Hope this helps you choose the right one for you. Feel free to ask me to explain in more detail if you need it (I've been a little terse & used a few too many acronyms .. sorry!),

    Kam.


  20. I use a Seagate SCSI drive for my main OS, apps and VM images of other OSs, several (it varies .. they're removable!) WD SATA drives for my data, and bootable Rev cartridges for some OS images that I prefer to run outside of my VM app (eWinXP/bartPE or OpenSolaris LiveCD when using some DCC/CAD software .. rather than multi-booting via a bootloader). The Rev setup is pretty cool because you can have several "versions" of your OS, similar to Windows backup's rollback, except that it actually keeps writes physically separate, so you can install the main image on a write-protected HDD or CDR .. or Rev cartridge!!

    For other stuff, I have approx 1000 CDRs (finally moving to DVDRs now B)) for things like incremental backup images and videos/movies/etc .. basically things that I don't need online/direct access to all the time! I also sometimes use CD-MRWs (Mt. Rainier) for installing some weird apps that I know I won't use ever again (e.g. just for a piece of work I have to do right now, but once I've done that, I'd probably never need to use it again), or for things that are OK to have really slow access to (for file access across my VPN(SSH)/WAN(DC) connections etc)! Ideally I'd have several optical drives for these tasks, but I'll probably wait for BRD before I get a new drive unless I find one for <25 bucks! Finally, I sometimes use a microdrive in an external USB-CF reader to move files between remote machines or to old machines that don't have much other than a USB port and a floppy drive :blink:.

    I don't think not knowing where my data is is ever a problem for me. I use a directory lister (there are some pretty good shareware ones on tucows etc) to keep track of my less important files, and diff to see changes. Next year, when the WinFS beta arrives on NT, and the equivalents on Linux etc, it will probably be less of an issue .. assuming users are happy to spend hours tagging all their files with tons of useful metadata ...!

    Most modern drives use FDB motors, and I think watercooling is quite a nice way of dealing with my noise problems (rather than soundproofing/heating :P or setting the drive into one of its low-performance silent modes). And the Rev cartridges reduce my potential electricity bill, as I don't need to run many OSs at once!

    RAID0/5 is nice for handling the throughput needed for AV editing setups & server tasks, but StorageReview.com's forums have some benchmarks discussing and showing that the advantage on desktops (single-threaded, single-user apps) is tiny. So I figured I'd set up the WDs as a simple JBOD instead.

    On a slightly off-topic issue, if you really want better access times, perhaps a software ramdrive/cache or a hardware SSD (like Gigabyte's 100USD SSD, or the HyperDrive-III if it's out yet) would help you .. rather than a larger SCSI controller cache. I remember reading a reviewer who had set up their OSs on one of these bootable SSDs, and got nanosecond (memory) access times and high throughput (the HD3 benchmark maxed out the PCI bus it was on at nearly 133MB/s!). Some of these drives have separate power lines, internal batteries, and backup HDDs too. But to me, the cost is still far too high (for RAM), you'd be using memory on a relatively slow PCI/PCI-X/PCIe bus, and the OS is usually loaded into memory anyway, which is faster (although in Linux you can set this to be reduced and to load it off an SSD or flash RAM drive) ...

    SCSI has lower CPU utilisation (hence the "smoothness" ebbinger_413 described) and has the equivalent of NCQ (TCQ), but with larger queues & more complex caching algorithms .. I agree it does feel faster .. but it costs much more :blink:.

    As for partitions vs separate drives, the outer part of an HDD is moving faster "linearly" than the inner tracks, and so has better throughput. You can, to some extent, control where data goes on the disk in some OSs. But perhaps the separate disks benching faster had something to do with this? I have no idea! Personally, I've never seen this noticeably-faster situation (I've used a similar setup at work many years ago!). It can't be anything to do with disk/controller caches at that size (GBs).

    Regarding gbE-networked vs SCSI/IDE local drives, if you populate the SCSI channel with four 15k drives at 100MB/s each, that's well over gbE (unless you have a switch with many ports in it), and SAS now has 3Gbps per port, with 8-port .. all the way to 32-port controller cards planned .. and you can connect multiple drives to each port using some extra hardware. There are also external ports for SAS and SATA (eSATA), so personally if I had that many disks (my case is too damn full to fit those in :blink:) then I'd buy an external drive cage with removable trays (I think Addonics had some nice ones for 50usd).

    BTW, ebbinger_413, have you thought about using bootable Rev disks (since you're changing your setup daily!)? You can get 5 disks for about 100gbp on eBay, and the drive itself sometimes goes for 125gbp for the USB external one. A word of caution, though, is that you need to use Windows or LiveCDs for the bootable setup (using Iomega's boot-and-run in the former's case).
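    To put numbers on the "four 15k drives vs gigabit Ethernet" point above, here's the rough arithmetic; the ~100MB/s per-drive figure is the ballpark from the post, not a measured spec.

```python
# Rough bandwidth comparison behind the "four 15k drives beat gigabit Ethernet"
# point. Per-drive throughput is the ~100MB/s ballpark figure from the post.
drives = 4
per_drive_mb_s = 100                  # sustained sequential, roughly
scsi_total = drives * per_drive_mb_s  # ~400 MB/s aggregate off the channel

gbe_mb_s = 1000 / 8                   # 1 Gbit/s link = 125 MB/s, before protocol overhead

print(f"4 x 15k drives: ~{scsi_total} MB/s, gigabit Ethernet: ~{gbe_mb_s:.0f} MB/s")
```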
