Everything posted by kam1405241509

  1. I assume you removed the bluetooth adaptor completely, or that it fell off (i.e. it didn't break in two, leaving the USB plug still plugged into it). Does the motherboard come with any onboard diagnostics (modern ones from Abit & Asus etc come with a nice LED that reads out error codes, so you can then look those up in the manual)? Usually the error codes aren't very detailed, but they at least tell you where in the boot process it gets to (I had some boot problems & thought it was the board (having just removed/reinstalled it to add some fans), but then looked at the board codes as they went through the sequence & it stopped at the RAM check ... it was incompatible RAM!). I'm guessing he means he doesn't see his HDD read/write LED turn on.
  2. Hi Spartacus :-), I love Rome Total War too. Check out this roundup of cards (http://www.xbitlabs.com/articles/video/print/2004-27gpu2.html). It's quite old (no 7xxx's etc) but I doubt you're after a high-end expensive card ... esp since these things double in power every 6-12 months ... it's probably best to upgrade later (unless you have a specific game/app that needs the 7xxx's power ... or you could upgrade to PCI-e & SLI etc .. but that's still quite an expensive route as well). Anyway, the review includes benches for a few strategy games (Rome) & some FPSs (Doom3, HL2, Far Cry). If you just want to play Rome & DoomIII at minimum settings (low-res, low effects), then just get a 5700, which will give you approx 20-23fps in both; otherwise a 6800 would be twice that performance & cost in Rome, and nearly quadruple that in Doom3 (because it has the hardware to handle complex shaders/lighting, & the fill rate etc ... and hi-poly characters/animation)! At the highest end in Doom3 (1600x1200, FSAA/AF, etc), the 5700 only manages 5fps whilst the 6800 can reach almost 39-49fps (54-90fps in SLI mode)! The 6800s have up to 16 pixel pipelines (the 62xx was released with reduced functions to sell cheaply etc .. I think they had 12), 6 vertex units, Shader Model 3.0 ... up to 7200Mpixels/s fill rate (same for #texels per sec wrt 1 texture unit per pixel pipeline) ... and the memory bus reaches 35GB/s (128-256MB)! 5700's are 75USD, and 6800's (at newegg) are 150USD :-). So it all really depends on whether that extra 75USD is worth it for you if you play the most recent games with complex shaders etc. Finally, personally I prefer NV for their drivers (very good Linux, OGL & multimonitor functions ... see the above link plus this one http://forums.xisto.com/no_longer_exists/), but ATI has a strong following too due to their focus on efficiency & low-heat/noise etc rather than brute force.
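If you're wondering where those headline figures come from, here's the rough back-of-the-envelope arithmetic. Note the clock speeds are just my assumed round numbers for a top-end 6800, not figures from the review:

```python
# Rough sketch of where the GPU headline numbers come from.
# The clock speeds are assumed round figures for a top-end 6800, not quoted specs.
pixel_pipelines = 16
core_clock_mhz = 450             # assumed core clock
mem_bus_bits = 256
mem_clock_mhz_effective = 1100   # assumed effective (DDR) memory clock

fill_rate_mpixels = pixel_pipelines * core_clock_mhz                   # Mpixels/s
mem_bandwidth_gbs = mem_bus_bits / 8 * mem_clock_mhz_effective / 1000  # GB/s

print(f"Fill rate: {fill_rate_mpixels} Mpixels/s")        # ~7200
print(f"Memory bandwidth: {mem_bandwidth_gbs:.1f} GB/s")  # ~35.2
```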
  3. The 64-bit Far Cry patch is the only damn good commercial home-user demonstration of what can really be done given 64-bit processing & memory addressing. The draw-distance can increase (the scene can be drawn out further into the distance), more objects can appear onscreen simultaneously, and textures can be higher resolution (this makes a big difference & I think most games developers say they are going down this route soon .. i.e. in 2006). Wrt games/graphics I really like the look of HDR & some of the newer shader ideas, but that's going slightly off topic ;-).
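As a rough illustration of why the extra addressable memory matters for texture resolution (my own numbers, assuming plain uncompressed 32-bit RGBA with no mipmaps or compression, which real games would of course use):

```python
# How texture memory grows with resolution (uncompressed 32-bit RGBA, no mipmaps).
def texture_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / (1024 * 1024)

for size in (1024, 2048, 4096):
    print(f"{size}x{size}: {texture_mb(size, size):.0f} MB")
# 1024x1024 ->  4 MB
# 2048x2048 -> 16 MB
# 4096x4096 -> 64 MB ... a few hundred of these and a 32-bit address space starts to feel cramped
```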
  4. I think the main problem with x64 is that many hardware makers haven't bothered to release 64-bit drivers at all, or at best the drivers are not very stable. I don't think the OS itself is to blame. Also, I don't think MS takes it that seriously; they know full well that this problem is due to vendors' resources being focused on Vista instead, which will have a longer lifespan (and is therefore worth the effort/time/money spent on developing Vista-specific 64-bit drivers). Again though, I don't know anything for sure since I haven't seen the internals!
  5. Vista brings with it a GPU-accelerated GUI (basically what OSX has had since Aqua .. Linux is also getting equivalent setups but they'll probably be minimalistic/efficient :-)). Eventually it'll bring with it the SQL-server-lite-enabled WinFS ... not sure I'd want a database running 24/7 behind the scenes, unless it doesn't do much beyond updates & tree restructuring etc until it is needed to find something. Current realistic specs are a bidirectional PCIe x16 GPU with 256MB, 2 CPUs (it's more threaded internally) with 2GB, SATA/NCQ ... and of course an HDCP HD-LCD to view BRDs/HDTV/etc (legal) content! I agree, but it depends on the user. I don't want my OS doing much since for me it's just there to manage/run all my apps efficiently. For a laptop especially, one that's trying to save power, running a DB on a 2nd CPU is excessively wasteful! I too don't want my OS dictating what I can & cannot view/execute .. I will certainly not have my main PC running any protected content .. There is cracked hardware out already wrt HDCP to DVI, but the standard allows for the blocking of specific peripherals & they can change the encryption codes etc at any time. I'm sure that eventually there will be workarounds .. or people will just get all their content from non-legit sources online instead .. in the same way that many people avoid iTunes files simply because they do not want any restrictions, so they burn their CDs & compress to an open format! Overall I think there's very little benefit for far too much cost with Vista. I'd use OSX if I wanted something pretty & with a database (and security), or Linux if I wanted something efficient that just works & does the job (and can look very nice if set up well etc .. without being inefficient). And I still really like W2K more than XP because it works just as well without any of the fluff! With HW virtualisation (esp in v2) there will be less/no reason to run Windows as a main OS just to run a particular app or two .. so then people will choose their main OS purely based on the features they need (or think they need). Finally, I think MS's logic is that they rarely produce a completely rearchitected OS (a lot of the stacks in Vista are completely rewritten or new), so they won't get a chance in the near future to take advantage of all the new hardware changes that have taken/are taking place (multicore, 64-bit memory space, PCIe graphics, SATA-NCQ, etc), so they may as well focus this OS on the very latest or near-future (DDR3) hardware, and wait for the majority of users to catch up. Eventually all this will be common/cheap (even multicores on laptops now exist), and hopefully the continued focus on lowering power by intel (and also AMD to a lesser extent) may have some decent results soon :-). This is all conjecture/guesswork though until the final build is released.
  6. The original roadmap stated Vista would be released to RTM in June & have its public launch in Oct 2006 .. it was never H1, that was just the later betas/RCs (http://forums.xisto.com/no_longer_exists/). Vista betas are getting delayed & now an MS exec has stated the final's been delayed too (http://forums.xisto.com/no_longer_exists/). They're going to release both 32-bit and 64-bit versions .. it'd be pretty dumb to sideline the majority of users (even if their experience would be pretty lame without serious hardware!). Yes, IA64 is a serious number of changes, but it's basically dead thanks to much higher prices & much poorer IA32 performance! Intel's other 64-bit CPU architecture is (now*) an exact copy of AMD64 .. because MS said they will not support two versions of x86-64 CPUs just because Intel didn't like the fact that they didn't invent it & because they wanted to cheat and use their dominance/power rather than compete fairly (AMD filed a recent lawsuit claiming unfair lack of competition from intel in the supply chain)! * Intel originally released an incomplete version of AMD64 (of course they called it an Intel acronym) ... Intel had fewer, not more, x86-64 instructions initially! Now, although x86-64 is a simpler change than IA64 (obviously .. as it's a bunch of extra instructions, not a whole new ISA), it's not simply a matter of widening the eight GPRs etc (64-bit addresses to be able to address >4GB without a broken Xeon memory bus extension). AMD doubled the eight GPRs & eight SIMD (SSE/2/3) registers, and the rest stays constant (eight x87 FPRs & the extra 120 hidden/internal registers for register renaming). Finally, AMD's K8 ISA isn't simply just 64-bit-ness. They also took the leap to HyperTransport CPU & PCIe interconnects ... and the integrated memory controller which helped scalability without losing/sharing performance .. these two only really come into their own at the high end though (2/4-way SLI, massive peripheral bandwidth etc for the former ... and SMP/ccNUMA for the latter).
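For the >4GB point, the arithmetic is trivial (a quick sketch):

```python
# Why 32-bit addressing tops out at 4GB.
gib = 1024 ** 3
print(2 ** 32 / gib)   # 4.0 -> a 32-bit pointer can only address 4 GiB
print(2 ** 64 / gib)   # ~1.7e10 GiB -> 64-bit pointers remove that ceiling
                       # (real CPUs implement ~40-48 of those bits, which is still plenty)
```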
  7. It may be possible to free up some space in that small laptop (by removing extra internal CDD/HDDs/batts you don't need when it's off) ... then embed/hide some phone/GPS chips in it so you can track it like they do for stolen cars .. no biggie. If you want more details/howtos ask away.
  8. I agree with Sarah81, WD SATA/PATA drives are very nice. They are the only 10k RPM SATA drives, with 3-5 year warranties on their top end drives, so they must believe their drives are reliable, otherwise they'd be losing money ;-). Ditto I'd agree that you need to get as much HDD space as possible if you're gonna store tons of videos on the HDDs. However if you are just aiming at max performance on a few AVs then SCSI would be best. Maybe even a combination of the two if you're loaded! I'll probably be selling a bunch of WD 250GB SATAs on ebay soon, and I took a look at the average price a few weeks ago (to gauge what price I should sell them at) and I think it was around 80GBP for new ones. IBM have some 400GB drives out, and several other manuf's will be doing likewise (right now just a bunch of stoopeed paper launches which are now stoopeedly common in this supposedly-fast-developing but-more-like-fast-lame-@r$-marketing industry :-((). Sorry for the rant, just remembered how long I've been waiting for larger drives promised nearly a year ago .. so dumb to wait for things that aren't out yet when they keep changing their deadlines all the time! With AV files, 16MB caches were popular on SCSI drives & are now common on high-end SATA/PATA ones ... not sure it'd be that big a deal when streaming a giant AV into RAM, but obviously no harm if it's the same price :-).
  9. Hi qwijibow, there are many different speeds of IDE, SATA, SCSI ... pSCSI runs up to dual channel U320 (320MB/sec) and 640 is on the way soon, along with serially-attached SCSI etc. SATA is currently at 150MB/sec and soon will be at 300MB/sec, which is still slower than good old U320 SCSI! If money is not an issue, a SCSI RAID array is the way to go for NLE video editing apps, no doubt about it. However, SCSI drives and controllers are more expensive than their SATA/PATA counterparts, due to the extra complexity in the protocol (it's made for multiple devices accessing things simultaneously & is built with large queues & respective algorithms for this). The algorithms are not all aimed at server tasks, since AV is a popular use of SCSI drives, so some desktop algorithms are also defined by the big disk vendors. Note you'll not get as many GBs for your GBPs if you go this route, but you'll get higher performance using fewer drives, so if this is what you value, go for it.
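To put those interface speeds in perspective, here's a rough transfer-time comparison (just a sketch; it assumes a drive could actually saturate the bus, which a single disk can't, so treat these as bus ceilings rather than real-world times):

```python
# Time to move a 4GB video file at various *interface* ceilings
# (ignores the drive's sustained rate, which is the real bottleneck for one disk).
file_mb = 4 * 1024
interfaces = {"PATA/133": 133, "SATA-150": 150, "SATA-300": 300, "U320 SCSI": 320}
for name, mb_per_s in interfaces.items():
    print(f"{name:10s}: {file_mb / mb_per_s:5.1f} s")
```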
  10. Hence RAID5! JBOD is just a bunch of disks, RAID0 is striping, RAID1 is mirroring, RAID2 is bit-level rather than block-level striping and is no longer popular (remember to choose a large block size in RAID0 if you are storing/manipulating large AV files), RAID3 is byte-level stripes with a dedicated parity disk, RAID4 is ditto but block-level, and RAID5 is block-level striping with parity data distributed across all the disks in the array, & it is currently the most popular method. When you have large numbers of disks there's RAID6, which is ditto but with the parity data stored in two locations so it can survive two disk failures! Then there are all kinds of nested RAID levels like RAID0+1 (two HDDs in a RAID0 stripe, with another two disks as a RAID1 mirror of the original), RAID10/1+0 which does a similar but opposite setup (stripe across drives that are locally mirrored), RAID50/5+0 (stripe across RAID5 sets), and so on.
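If it helps, here's a little sketch of usable capacity & fault tolerance for the common levels (my own summary of the standard definitions, for N identical disks of a given size):

```python
# Usable capacity / failures survived for common RAID levels, given n identical
# disks of size size_gb (standard textbook definitions, nothing vendor-specific).
def raid_summary(level, n, size_gb):
    if level in ("JBOD", "RAID0"):
        return n * size_gb, 0            # no redundancy at all
    if level == "RAID1":
        return size_gb, n - 1            # every disk is a mirror of the first
    if level == "RAID5":
        return (n - 1) * size_gb, 1      # one disk's worth of distributed parity
    if level == "RAID6":
        return (n - 2) * size_gb, 2      # two disks' worth of parity
    if level in ("RAID10", "RAID0+1"):
        return n * size_gb // 2, 1       # guaranteed to survive at least one failure
    raise ValueError(level)

for lvl in ("RAID0", "RAID1", "RAID5", "RAID6", "RAID10"):
    cap, tol = raid_summary(lvl, n=4, size_gb=250)
    print(f"{lvl:6s}: {cap:4d} GB usable, survives {tol} disk failure(s)")
```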
  11. 2. Oh I see, you mean power consumption, doh, sorry. Well you could always power it up with a cheap 2nd PSU or even an external one (I use a laptop-style PSU for my ext USB converter etc!). Also I forgot to mention that NCQ (& some other neat features) is NOT on all SATA drives, or even on all SATAII drives, which don't even have to be at 300MB/s either ... there's a whole load of can-o-worms on this! I think Anand or Ace had a good explanation a few months back. Basically SATAII is a set of recommendations, not a standard or anything .. I think it is a new name for the working group too. SATA-300 etc basically states the speed, and the manuf's must state the features separately.
  12. Hi sparx, Yep, no probs. If you have the appropriate PATA & SATA controllers (either onboard or on PCI) it's just a matter of plugging them all in. The OS sees them as normal drives .. except Linux, which treats SATA drives as SCSI ones, but that's another matter! Nope, often SATA drives have both the old Molex PATA-style power connectors, along with the new thin SATA power ones too :-). Even if there's only the SATA ones, conversion is simple (either DIY or I think I've seen many dirt-cheap converters online) but that's probably not necessary at all anyway! It depends on the drive's model (I've seen a review of a PATA drive beating out high-end SATA ones in certain benches!) wrt how the cache algorithms are defined & how much cache etc ... and also on the usage model, wrt if it's a server then maybe the larger queue algorithms of SATA may be handy. For desktops, the NCQ queue algorithms may be handy in getting the right sequence of data off the disk in terms of what's closest to the head at the moment etc. It's best to check out & compare individual models. WD Raptors are SATA only and are rated as the highest performing desktop drives ... but I don't have access to one so I couldn't say for sure ;-). Also, it depends on what you are going to do (video needs high bandwidth, whereas gaming often needs fast access times along with a reasonable amount of bandwidth, etc etc).
  13. Well the ink on the paper towels seems to me like a good thing .. at least it seems the cartridges themselves are OK and aren't dried up or something. My old-ish Epson C44UX inkjet was the same. I tried the old safety-pin/hole jobbie, and I also tried to squeeze the carts to pour a LITTLE bit of ink into where the cart's hole met a small protrusion (that pierced the carts) within the head-part ... but I remember it took ages to get the thing printing properly again .. and I also remember it was very messy & frustrating (to the point where I vowed never to use it again .. and moved to a cheap HP laser which had more paper jams than any printer I've ever dealt with .. but that's cuz I was desperate to do manual duplex using el-cheapo thin/lightweight sheets!!). Damn, sorry for the divergence/rant.
  14. Hi golgothurteen, they mention: "The Lumenlab DIY 15" Projector is a REAL projector with a high quality XGA LCD panel, a powerful Metal Halide lamp and REAL projection optics. The so-called projection TV kits are simply a cheap fresnel lens that you put in front of your television; an image is projected, but not a watchable one". It's definitely not worth paying for "their" info ... I've seen drawings/schematics for these types on some forums a few years ago. I've seen the Fresnel+TV kits on an online TV tech show, and the "quality" was worthless. These guys say they use proper optics etc. What they are doing is basically projecting a bright lamp through a normal LCD (rather than one on a chip) and then using the lens to focus the image crisply on a wall/sheet. I remember the forum thread I read mentioning problems of heat being the main issue .. and one guy built a bunch of fans in his. All of these sites sell "howto's" that should really be free .. and they are just there to then sell their own wares! Other options I read about include the use of a cheap OHP with an LCD display, and both transmissive & reflective LCDs ... Kam.
  15. Yeah, I've seen loads of these, both DIY types & premade, and there are tons of different features available too. The most basic would be:
1. DIY drill/tap acrylic to mount the HDD from underneath
2. mount rear sockets with a cheap IDE to USB (or eSATA) adaptor
More common/advanced ones would allow both internal (p/s-ATA cage) & external (USB or eSATA) mounting. The most advanced ones have autonomous RAID1 backup etc .. but I can't see the point of those, much! There are also ext cases for 2.5" laptop drives with tiny USB & power interfaces ... the one I saw had a sturdy metal case too. Not sure if it's that easy to DIY when you get to smaller drives.
  16. Hi WeaponX, sorry for the late reply again, and sorry for the long message previously .. one of my bad habits :-(. OK, to sum up, PATA=ATA=old, SATA=new. But either is just fine to me, since as you can see in that review I posted, this drive is damn good & competes well with all the new SATA drives too. So to me there's not really an issue between choosing one or the other. I'd actually prefer PATA to use in older PCs, or SATA if you have a specific need for some feature. PATA is actually ATA, and it's the older parallel interface that runs at up to 133MB/s. SATA is the newer serial interface standard that runs from 150MB/s upwards (SATAII is a little confusing to some extent since it's a lot of marketing just to state that the drive has certain features that improve how data gets read). I think you can buy converters to go between the two, since the protocol is still pretty much the same, just serial or parallel, and hence all the same software drivers pretty much work as is (unless you want to enable SATA/II specific features). If you have any more questions please write them here, I'm sure there must be other people who are also confused, so your questions will hopefully make me write better answers :-). Also, I will reply no matter what the question, though I can sometimes go AWOL now and then due to work deadlines, sorry :-).
  17. To me PCs have got much cheaper than they used to be since they became popular with the masses a decade ago. It used to cost 10k just to get a crap PC that'd only be useful for work (about 20 years ago), then that dropped to about 3k for a machine that could play games but not as well as dedicated consoles (15 years ago), to machines that cost 2k but played games well thanks to 3dfx (about 10 years ago) .. nowadays you CAN pick up a gaming machine for 500GBP if you make many compromises & don't go for the high-end all the time. As for Macs, yes I must admit I do love OSX, I think it's the best of the current-gen desktop OSs, no doubt about it :-). But I've got a lot of x86 apps that I need to run, so for me I'm waiting for the x86 versions .. hopefully they'll be as good as their recent Macs (high-end & not that much more expensive than PCs). But I'm not really into console games (beat-em-ups, non-FPS shoot-em-ups, platformers etc). I prefer flight sims & FPSs & adventures, and PCs are better for these genres at the moment. There's no technical reason why consoles couldn't run these types of games (xbox1 was basically a PC!), but they don't aim at these genres, and even on PCs flight/adv/etc are dying breeds, since most people don't like them :-(. I have absolutely nothing against consoles per se, but the next-gen consoles don't look that appealing to me compared to an easily upgradeable PC with a fast CPU/GPU/PPU. There's no way PCs can compete with consoles in hardware costs, since MS/Sony discount the hardware (MS lost billions on XB1) and make up the cash on selling a licence per game sold etc. But if you wait a year or so after the next-gen systems launch you could get a better-specced PC at a bit more cost that will play games aimed at those consoles for a few years! But for me, a console just doesn't replace my PC gaming needs .. I wish it did, but I doubt it ever will! I don't think there's any need to spend huge amounts of money just to play games on PCs unless you want to continuously stay on the cutting edge for some reason, and maybe that'd only be necessary if you were competing in tournaments professionally/commercially or something like that, if you know what I mean. I can understand why your friend's game dev co wants a cutting edge machine at the time of game release so they can make sure that's where they aim their top specs at (it's a bit pointless trying to sell a game no one can play yet, and that by the time they can it'd have been superseded ;-)). But they are doing it to make money, not for playing games per se. Personally, I'd only spend a lot of money on a PC IFF there was a financial payback from it (either now or in the near future), but that's just me .. I'd feel too guilty trying to justify a high price just for gaming, unless an insurance company was paying for it for me of course ;-). Er, heh, I thought I'd seen it all ... now there's a famous gaming company trying to sell a PC for the price of a small car ... that's just nuts to me. I'd never spend that much (in GBP) on a car to go from A to B, let alone a PC just to play games & program code/type docs etc!! Large screens at those response times are usually low-res ones. That kinda defeats the purpose of a monitor to me .. that's more like a TV, so why should it cost so much more than TVs normally cost on average ;-)? They say it's a Samsung with a pathetic 1366x768 (not even 1080p!) resolution yet costing 9000USD!!!
You'd have to be demented to pay that much for a screen that'd be useless for watching HD movies or TV in the near future! 500USD just for cosmetics ... I'd rather go down to my local garage and get them to do a metallic paint job (sure, none of this angle-colour-changing stuff .. but is it worth 500USD of your own money when that's the price of a half-decent CPU?!). Just doesn't make sense to me ... unless I was a rich kid & it was my parents' money & I had no idea of the value of money ;-). No harm in dreaming though! 2000USD for loading it up with 2TB of HDDs ... again there are much better ways of spending money than on loads of HDDs, and you can always simply add drives when you run out of space, at which point those prices would've dropped .. so to me I'd rather start with just 1 or 2 & try to use cheap optical for bulk files like videos. Basically, there are ways to have a decent system for most people's needs whilst not breaking the bank, but it involves being ultra conservative and minimalistic .. and doing lots of small minor upgrades but only when necessary .. rather than aiming high at the start! Again it's my own opinion .. just my 2c's wrt what I've found myself from making some dumb mistakes along the way in buying at the high-end once or twice for a few components and usually regretting it! I've never regretted spending on decent monitors since they're worth it wrt you can use them for a decade, but a high-end graphics card just for 1 game definitely isn't to me, if you can manage to hold out on buying until the next versions come out so you can then buy the previous gen at half the initial price ;-). Again, if you're using them for work, then it's a totally different issue!
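On the resolution point, the pixel counts make the difference obvious (quick arithmetic):

```python
# Pixel counts: the "HD ready" panel vs proper 1080p and a typical desktop LCD.
modes = {"1366x768": (1366, 768), "1920x1080 (1080p)": (1920, 1080), "1600x1200 (UXGA)": (1600, 1200)}
for name, (w, h) in modes.items():
    print(f"{name:18s}: {w * h / 1e6:.2f} Mpixels")
# 1366x768 is only about half the pixels of 1080p -- not great for a 9000USD "monitor"!
```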
  18. A commercial rendering app normally involves only the CPU/RAM, yes, but still there are a few different options available: 1. Expensive Opteron ccNUMA machine 2. Cheap network cluster .. set up network rendering in your apps (to use those 12 PCs when they're not being used ;-)) .. see the little sketch at the end of this post 3. Not quite there yet, both the hardware & the software, but it's worth keeping an eye out on these .. several recent academic ray tracers make use of programmable GPUs that support certain shader languages .. but I found they weren't any faster than a fast x86! 4. Expensive custom hardware .. there are also commercial rendering hardware accelerators (e.g. from ART in the UK). These are PCI-X cards with 8 of their AR350 processors, and you need to recompile code to use their libs though they did write a RenderMan interface & 3dsMax/Maya plug-ins :-)) ... literally ONE PCI-X card can shrink rendering times from HOURS to MINUTES. Their hardware can accelerate ray-tracing, radiosity & more :-). I still think it's nonsense that intel is only faster than AMD CPUs because apps are optimised that way .. I compile code using INTEL's compiler for my AMD CPU .. and it's faster than the GNU compilers when you set SSE3 optimisation flags etc! AMD's high-end architecture allows you to scale the number of CPUs & the RAM. This IS important in rendering .. but would cost boatloads of cash :-(. It allows for 25GB/sec bandwidth when running across 4 CPUs .. and with 4GB DIMMs that'd be 64GB of system RAM!! But that's not relevant here due to the ridiculous cost of it all .. at the moment ;-). I agree though ... there's far too much playground antics amongst users & manufacturers/vendors! I couldn't care less whether intel or AMD were on top, but personally I've found that whenever I'm up for a new machine, AMD seemed to be faster (having tried out both in my workplace with my benches, apps & code). Oh, and also the OS, drivers AND most importantly the apps must be compiled to support 64-bit ... this is why you didn't find any difference in games .. most commercial games/apps are still 32-bit .. I'd still recommend AMD for having a better 64-bit implementation (I read recently that Intel are finally going to support all the AMD64 instructions now) and multi-core design (crossbar switch, integrated memory controller etc), so it's a better investment in the platform for when the software eventually catches up (unless you use a lot of open-source apps or your own apps ;-)). I'm sure Intel will catch up eventually .. though AMD has recently revealed its plans for the future too, and obviously they are going to try to keep ahead. Anyway, competition is good .. for both camps ;-).
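For option 2 (the cheap network cluster), the idea is just to divide the frame range across the idle machines and collect the rendered frames afterwards. A minimal sketch of that bookkeeping is below (the host names and frame range are made up for illustration; in practice the apps' own network rendering, or a proper render manager, handles all of this for you):

```python
# Toy sketch: split a frame range across idle machines for network rendering.
# Host names and frame range are hypothetical placeholders.
hosts = ["pc01", "pc02", "pc03", "pc04"]
first_frame, last_frame = 1, 240

frames = list(range(first_frame, last_frame + 1))
chunk = -(-len(frames) // len(hosts))   # ceiling division: frames per host
for i, host in enumerate(hosts):
    job = frames[i * chunk:(i + 1) * chunk]
    if job:
        print(f"{host}: render frames {job[0]}-{job[-1]}")
```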
  19. Hiya WeaponX, many printers are damn annoying these days. I'm guessing you've tried all the usual printer driver clean cycles etc? If not then, if you're in Windows, go to Control Panel -> Printers -> Printer Properties -> General Tab -> Printing Preferences -> Maintenance .. or something like that. Then click on print head/nozzle cleaning & print head alignment (I can't check a Lexmark driver .. but these are my Epson's terminology). Did it say "cartridge" or "message carrier stall .. cannot communicate with the printer .. error 0000 .. press power key"? At first I thought that sounds more like the cartridge's chip (if it is one of those :-o) isn't recognised .. maybe it's not an "official" cart/chip etc .. but then I read the pages below & it seems the printer doesn't like the position of the holder of the cartridges & is basically asking the user to move it to the leftmost position!? A bunch of people here (http://www.fixyourownprinter.com/forums/inkjet/22795) are getting that message, and the general consensus is to clean the contacts on the printer & cartridge. "The transparent strip is the most important thing to fix it. Using rubbing alcohol and a paper towel, clean the strip. This problem gets started when the assembly at the carrier stop position gets filled with the dripping ink over a period of time. This assembly can be cleaned by removing it, after getting access to it by removing the two sides and back of the printer. There are 2 screws in the back that should be unscrewed and the rest comes out after pulling the snaps. This is what I did to fix the jam issue, and then I encountered the carrier stall because my dirty fingers touched the transparent encoder." I'm guessing their manual doesn't say how to take the damn thing apart .. but I'm sure customer services have this info, so maybe it's worth emailing them for a disassembly diagram .. but often these guys get worried & don't like to give out this kinda thing (warranty voids etc & the reason why they make it so annoying to take apart in the first place .. yet more nuisances) ;-). Still, the guy above says it's just 2 screws & a bit of a tug! Also this page (http://forums.xisto.com/no_longer_exists/) might be helpful ... ""Carrier Stall" or "the cartridges will not move" issues. Error: Carrier Stall. To check the cartridge carrier: First unplug the printer. Move the cartridges to the left by hand and take them out. Try to print without the cartridges in the carrier and check to see if you still receive a "Carrier Stall" error message. If not, put the cartridges back in the carrier and snap them in at the back. The message should no longer appear. If the power light on the printer is not blinking, this error message may be caused by a communication error. In this case: Remove any other devices attached to the USB/LPT port and uninstall the Lexmark printer. Connect the printer directly to the PC and ensure that power is being supplied to the printer. Reinstall the printer. Error: Carrier Stall. Power light is blinking. Causes: The cartridge carrier may have lost its location on the print path or there may be a communication error." So this seems to say there's nothing wrong with your carts, it's just the carrier .. I hope this is the case :-). Hope that helps! I'm kinda busy at the moment, sorry, but I'll look into it a bit more tomorrow, if these ideas don't fix it. Kam.
  20. He he, Intel are number 1 overall since they sold loads over the years. But look at recent/new CPU sales on a month-by-month basis and AMD is actually winning more often (recently)! AMD are so popular now they actually had recent stock problems in keeping up with the recent demand! Anyway, popularity isn't a very logical reasoning ... lemmings love to jump off cliffs, no one used to get fired for buying IBM but now their PCs are sold by the more efficient Chinese company, Lenovo, beat-em-ups were the most popular games in the charts but for longevity you can't beat a good RPG, STDs/AIDS are very 'popular' in Africa, slavery used to be very popular, etc ... you get the idea!! They may have more products, but if they are all underperforming, who cares! The world's first/only board with FOUR PCI-Express graphics slots is an AMD/NV based one ... intel boards definitely don't have more features these days, at least not the features important to me .. performance!! Safest choice .. mm .. I'd feel pretty depressed if I'd bought an intel setup and then got to see a decent AMD setup .. big mistake if you are after FPU performance IMHO!
  21. Sorry I'm repeating again ... neither AMD nor Intel chips burn up, since both have protection circuits and sensor diodes on the chip. Intel high-end CPUs run very hot due to the long-pipeline/high-clockrate design decision, which intel have now backed down on and are going back to the shorter-pipeline design again next year. To avoid stability/heat problems on either AMD or Intel machines, it's very simple .. just read/remember some school-level thermodynamics chapters, follow common sense, buy some thermal sensors, buy/make decent cooling parts and make sure you know all the thermal specs/limits of all your components .. no big deal :-). I've also had problems, BTW; they all happened to be in intel systems but that had NOTHING to do with it. They were all HDD failures due to avoidable dumbness/laziness on my part: 1. a long long time ago I asked an engineering friend, stoopeedly presuming he'd know more than me (he did, but only the theory, so I should've just read everything necessary myself anyway, or found someone with previous practical experience on this specific issue!), to help with an HDD problem and he shorted it (still, at least I now know never to let any one else near my main machine ;-) .. DIY everything rather than delegating & then fixing the ensuing errors, it's much less stressful in the long term!) 2. the weather was getting hotter and I knew I didn't have enough fans, so I asked for some several times but eventually gave up asking, and er they never got ordered (another lesson in that if you want something done do it all yourself rather than wasting time asking people all the time .. other people will often forget to do trivial things like this .. but the resulting damage isn't trivial at all!). Then someone turned up the heating on what was a very hot day!! No biggie since I didn't lose any data, but it was such a waste of money etc (I hate wasting money/time/etc .. it's so silly, you know?) to sort it all out afterwards! I now do all my work on my own machines and have very decent cooling in them :-). At work, if I want a machine on 'their' (our!!) network, I have to give them admin rights to my machine, and in some cases must give up admin rights on it (some of my colleagues aren't allowed to install software on their machines .. a problem considering we develop software in a CS lab .. doh!!). 3. had another HDD failure from what turned out to be a totally crap HDD (dodgy uDrv)! I've never had problems with machines I directly control .. but unfortunately in a workplace 'they' usually don't like to give people full control unless you sign up for admin tasks etc! Best thing is to make sure you have all your main stuff (work or play etc) under your full control (my serious work machines are not on my workplace's LAN .. though I do need some data off the LAN for work, I get that off another machine .. anyway!!), and make sure you learn enough to both set it up and maintain it wrt admin chores. Best to learn it when you take a holiday or on evenings/weekends etc. I know this sounds like a hassle to some people but once you've learnt the basics, you don't have to reread much new stuff & after a while it's very easy once you get used to the terminology etc. Also, once you design a decent setup, you don't have to do any extra work for it ... ideally you should do an automatic nightly incremental backup so then you'll never lose much even if there was some failure etc. If you do everything yourself, you have full control and can do anything you want ..
the ideal system YOU want without having to beg for permissions or make silly compromises etc. The worst thing is being constrained by other people, or by greedy corporations, etc. Er, but that's another topic ;-). He he, you must be kidding, right? AMD completely dominates the high-end x86 field in performance. You'd go for intel not AMD if you follow the masses without understanding the underlying computer architectural issues, or unless you really want intel for brand/support/etc .. i.e. for non-performance reasons! https://tweakers.net/reviews/442/5/dual-xeon-dual-opteron-en-quad-opteron-serververgelijking-benchmark-details-en-apachebench-scores.html shows the NUMA/bandwidth advantages (that I mentioned earlier) in numbers. As you can see (use babelfish to translate, BTW, or just look at the figs) the dual opterons are about TWICE the performance of the dual Xeons in ApacheBench. It gets even more crazy at quad levels and beyond. AMD's architecture is designed to scale for enterprise servers; Intel's current design isn't meant for this.
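On the nightly incremental backup point, here's the gist of what I mean, as a bare-bones sketch (the source/destination paths are placeholders; schedule it with whatever your OS uses for timed tasks, and a proper tool like rsync does this far better):

```python
# Bare-bones incremental backup: copy only files newer than their existing backup copy.
# SRC and DST are placeholder paths -- point them at your data and your backup drive.
import os, shutil

SRC, DST = "/home/me/work", "/mnt/backup/work"

for root, _dirs, files in os.walk(SRC):
    for name in files:
        src_path = os.path.join(root, name)
        dst_path = os.path.join(DST, os.path.relpath(src_path, SRC))
        os.makedirs(os.path.dirname(dst_path), exist_ok=True)
        if not os.path.exists(dst_path) or os.path.getmtime(src_path) > os.path.getmtime(dst_path):
            shutil.copy2(src_path, dst_path)   # copy2 preserves timestamps
```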
  22. I don't think it's that alone. I've met many people who are anti-this-or-that for no good reason at all, besides that it may be 'cool' to be outside the masses or something, or perhaps they like bashing/moaning, I dunno! Personally I couldn't care less about which company my CPU is designed by, whether intel or AMD, I only care about which is the best performing for my tasks at my price range, and for that AMD has won me over since the Athlon Tbird days .. though I don't claim any loyalty either way, it's simply from doing benches etc myself, since I usually have access to both types at work/etc! He he, for word processing type things, my 586 is just fine, if I don't have a flight sim running in the background ;-). So I don't really care what CPU it is for that task, and don't base my decision on those benches, sorry. But I do run a lot of ray-tracers, so I am very interested in your views, cryptonx. Can you show me some benches, please (independent or your own, I don't mind .. just some figures will do plus some info on the test itself so I can do them here)? What apps do you run, or is it your own code, and if so what compilers/options are you using at the moment? I agree that AMD lagged in some benches like video encoding up until a few months ago, but now with SSE3, they seem to be matching or beating intel at every benchmark. Video isn't as important to me ... mainly I need brute computation power, and for me AMD seems the winner for many years now for an affordable desktop (ignoring damn expensive IA64 setups!). http://forums.xisto.com/no_longer_exists/ http://forums.xisto.com/no_longer_exists/ http://forums.xisto.com/no_longer_exists/ shows that AMD wins every productivity test, from office to photoshop, and every video encoding test, which is something Intel used to dominate thanks to SSE3-optimised apps. The test is basically A64FX vs P4EE .. gaming-type chips .. but FXs are just unlocked/fast Opteron 1xx's! They also test the X2 procs which seem to do well also. That last URL is on 3D workstation apps, and Intel's EE proc seems to be not that different to AMD's FX (ignore the X2's 60% lead in Cinebench .. I'll talk about that next!). http://www.sudhian.com/showdocs.cfm?aid=672&pid=2574 is an X2 review, but also includes Opteron 275 & Pentium D CPUs. In raytracing & rendering Cinebench tasks (the 1st two figs), when optimised for TWO threads (X2) it's 21% faster than the EE, which is faster than the Pentium D, and when running FOUR threads (275) it's about twice as fast! And these aren't even AMD's best CPUs at the moment!! Finally, sorry I repeat this, but Intel's 64-bit implementation sucks at the moment, and their dual-cores are only for their desktop CPUs, not their rendering/workstation/pro CPUs ... so to me AMD is miles ahead in the pro field of CPUs ... and also in the gaming/home field. Dual-core is obviously important in rendering (as shown in the URLs above) since it's an inherently parallel computation.
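Rendering parallelises so well because each tile/pixel is independent, so throughput scales with core count almost linearly. A toy illustration using Python's multiprocessing (the per-tile "work" here is a dummy busy-loop standing in for shading, not a real ray tracer):

```python
# Toy demo of why rendering scales with cores: independent tiles, no shared state.
# The per-tile "work" is a dummy busy-loop, not real shading.
import time
from multiprocessing import Pool

def render_tile(tile_id, iters=2_000_000):
    acc = 0.0
    for i in range(iters):
        acc += (i * tile_id) % 7     # stand-in for per-pixel shading work
    return acc

if __name__ == "__main__":
    tiles = list(range(1, 17))       # 16 independent tiles
    for workers in (1, 2, 4):
        start = time.time()
        with Pool(workers) as pool:
            pool.map(render_tile, tiles)
        print(f"{workers} worker(s): {time.time() - start:.1f} s")
```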
  23. Hi again, Jedipi, I should've mentioned that if you want to use pro 3D apps like the various CAD variants, Maya, etc .. it MAY be worth spending extra bucks on a pro card that has lower performance in games (though often they have more VRAM, higher bandwidth etc ..). But you could get most of the way there by doing some DIY mods to enable use of the pro drivers, and could overclock to claw back the extra performance. The only issue is memory & resolution (although hardware clipping, accurate line drawing/anti-aliasing etc might not be enabled in all hacks!). This is probably gonna be less of an issue what with the new unreleased ATI X1800s etc .. I'm sure Nvidia will answer them back ;-). I can explain in more detail if you want .. just ask ... or take a look at the pro reviews online.
  24. This is my summary of how I view AMD vs Intel, currently ... It used to be said that AMD chips don't do as well as intel for multimedia, but now Intel are stuck because of thermal problems & can't raise frequencies anymore. The most recent AMD reviews show AMD winning in all tasks. They also support SSE3 now. I don't know any games that run better on intel. 64-bit Far Cry is a great example of what x86-64 can do .. an AMD innovation. AMD is definitely not the el-cheapo equivalent anymore, they are the leaders. Intel tried to force people to move to IA64, and had to emulate 32-bit x86 in hardware, an ugly/expensive solution compared to extending the architecture & adding registers etc. AMD also designed this generation for dual-core operation, unlike the underperforming hyperthreading (which was only developed to cover up the lacking performance of the deeply pipelined NetBurst architecture of the P4). Modern AMD K8 CPUs have integrated memory controllers, so there's lower latency, and you can use older/cheaper DDR1 memory! It also enabled HT to be used to link processors together, each with their own dedicated dual-channel memory buses, so it can scale until, of course, board makers reach the physical limits of track density on the n-layer motherboard. No more "adding procs reduces per-proc bandwidth on the shared FSB" problem anymore :-). This was only on mainframes/supercomps until AMD decided to take a leap of faith and just go for it :-)). And now AMD pro CPUs are finally the recent #1 on a month-by-month basis. I'd say Intel aim at the masses & go for what sounds great (marketing), but is the cheapest solution (so they can sell to the masses). This is what they want to do, they want volume & think huge profits will come that way. But there's only so far behind you can be before customers stop ignoring those issues. So Intel decided to focus on the platform (which they are really strong in & can beat AMD on because they are so big .. but then AMD partnered with chipset vendors who'd been screwed by intel dominating their space!!). Now Intel are forced to not only accept x86-64 over IA64 for the masses, but also to focus on performance per Watt wrt TDP as AMD have been all along! AMD don't care about marketing (and have quite bad marketing execs it would seem, given their 2nd place .. though that's finally changing!) and they mainly care about what's best for the customer (which, bizarrely, is the ideal I learnt in marketing class .. though it's rarely followed & often negated!!). They go for the best solution, no matter the cost, & are willing to take HUGE risks/gambles if they believe their design is correct.
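To put the shared-FSB point in numbers, a rough sketch (6.4GB/s is approximately what dual-channel DDR400 gives each Opteron, and is also roughly an 800MHz Xeon FSB, so it's a convenient assumed figure for both sides):

```python
# Shared front-side bus vs per-CPU (NUMA) memory controllers -- rough numbers.
# 6.4 GB/s ~ dual-channel DDR400, assumed here for both the shared FSB and each CPU's own bus.
per_link_gbs = 6.4
for cpus in (1, 2, 4):
    shared_fsb_each = per_link_gbs / cpus      # everyone fights over the one bus
    numa_aggregate = per_link_gbs * cpus       # each CPU brings its own memory bus
    print(f"{cpus} CPU(s): shared FSB {shared_fsb_each:.1f} GB/s each, "
          f"NUMA aggregate {numa_aggregate:.1f} GB/s")
# At 4 CPUs the NUMA aggregate is ~25.6 GB/s -- the figure I quoted in an earlier post.
```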
  25. I dunno. AMD is a great gaming chip, but I wouldn't say Intel is a good business-type multi-tasking CPU compared to AMD, when AMD has the best dual-core implementation by far, and hyperthreading is next to useless in comparison (most people/vendors disable it by default since in some tasks it's actually slower .. it was pretty good at multimedia apps a while ago but things have changed now with AMD dual-core being widely available [if only the huge demand didn't outstrip AMD's supply so much .. but I'm sure they're working hard on that .. it was just much larger than anticipated by all those marketing/market research firms ;-)] in many market segments .. it was just a temporary fix to the unique problems of the NetBurst architecture).