Posts posted by kam1405241509


  1. Overall, AMD gives you more computing bang for your buck. That said, there are important architectural differences that may, or may not, be relevant for what you want to do. If it's gaming, go AMD. If it's business apps, go Intel for Hyper-Threading, because that can make a noticeable difference. If it's lower power/noise, go Intel and get a Pentium-M and a desktop chipset adapter.



    Majestic, if it's multithreading you want, go AMD dual-core, not Intel Hyper-Threading! AMD dual-cores are rated around 69degC max and run far cooler, whereas the P4D/EE had TDPs of something like 130W (175W when someone overclocked .. making watercooling a must in this case ;-)) at the highest end ... running at high frequencies is not the way to go if you want low heat/power!! Pentium-Ms, even in desktop mobos, don't come close to competing with an Athlon 64 FX for gaming .. though they are great for things like Mini-ITX car-PCs & HTPCs, no doubt, they're not for serious gaming enthusiasts, which is what the majority of home users (and probably Asta members, I'm guessing!) are .. it's like comparing apples to oranges wrt different market segments, really!! With PowerNow (the AMD equivalent of SpeedStep) you get further efficiencies. Also, there are low-power Opterons available (the HE & EE chips) if you want to go really low power/heat for rack servers/clusters! There are also A64 mobile chips & they'll be the first with dual-core mobile, if that's your thing!! For performance per watt Intel is nowhere near them & they know it .. hence they say they are realigning to aim at this for next year ... we'll see .. at least they're not denying everything and then showing an about-turn only months before launch, like they did with the x86-64/IA64 issue! For now, I can't see any reason for going the Intel route for most tasks.

  2. OK... you guys are comparing AMD's and Intel's CPUs in two different fields, gaming and business.

    I wonder which is better for a student who is studying computer graphics?

    3ds Max and Photoshop are used frequently, along with writing programs in OpenGL.

    Would a 64-bit CPU have better performance (compared to 32-bit) when those kinds of software are used?



    Hey Jedipi, for DCC/CG (3DS etc) & GL coding etc .. AMD has better FPU performance & also SSE3 now, plus a better implementation of x86-64 (which allows a larger memory address space .. which will help you manipulate giant >4GB image/video files .. check out 64-bit Far Cry for proof that it's worth going this way in real-time 3D apps .. incl HDR, drawing to further depths, etc ;-)) ... the list is endless.
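    Just to put a number on that >4GB point (this is my own toy illustration, not from any benchmark): a 32-bit pointer tops out at 2^32 bytes, which is exactly why huge image/video files get awkward on 32-bit systems.

```c
/* Quick arithmetic behind the ">4GB" point above (illustrative only):
 * a 32-bit pointer can only address 2^32 bytes, so a single process
 * can't map a file bigger than that in one go; x86-64 lifts that limit. */
#include <stdio.h>

int main(void) {
    unsigned long long addr32 = 1ULL << 32;   /* bytes addressable with 32 bits */
    unsigned long long addr64 = 1ULL << 48;   /* typical usable x86-64 virtual space */
    printf("32-bit address space: %llu GiB\n", addr32 >> 30);   /* prints 4 */
    printf("x86-64 virtual space: %llu TiB\n", addr64 >> 40);   /* prints 256 */
    return 0;
}
```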

     

    I can't remember where this was, but I read a review on dual-core AMDs, and they found that certain PARTS of the DCC/CG process (rendering) were better accelerated by 2 threads than others (real-time viz) .. though both cases were pretty good (I think it was about 130% & 180% or something like that for most of these sorts of apps).

     

    As someone else said earlier, if you're into 3D apps you need a decent 3D card. Which one depends on the apps you use or value most etc. For GL you don't need a pro card, since most gaming cards still support GL (though with Vista there's a major issue wrt GL becoming a 2nd-class lib under D3D .. seriously crap considering how many pro/academic apps are written in open OGL so they run on giant UNIX clusters .. I guess most academics etc have moved or are moving to OSX/OpenSolaris/Linux/etc anyway, so it's not really an issue, but still, I can't see the point of the move, esp with MS making SFU a priority on their servers & giving Vista many UNIX-like features ... perhaps different people within MS have differing views, so there's no homogeneity in the rather massive/complex whole OS :-(()). Er, sorry for the rant/tangent again ... had to get it off my chest & hope others can shed some light on the issue ...


  3. I have an Intel processor, a 2.4 GHz :mellow: and so far I haven't had any problems with it, but is Intel also better for programming in Visual Basic? Or is AMD just the best CPU there is?



    For VB/C/J programming, word processing etc, even my ancient 586 machine would be fine, depending on how bloated your apps are (I think VB requires an IDE & is a very visual way of developing GUIs, hence the name .. and if you use recent versions then you'd need a decent machine, since this is the way MS codes, e.g. they had a famous flight sim hidden in a previous Excel version & their initial recommended spec for Longhorn/Vista was 6GB .. and then RAM prices shot up ;-)). You could use a version of VB that was released around the time of that 586 and obviously it'd be fine, but my guess is you're asking about the latest version of the MS IDEs (Visual Studio), not all the junk I just wasted everyone's time on (sorry for the tangent yet again!!). Even so, it's not that heavy when coding, which is simply typing some text, with the IDE's code completion trying to pre-empt what you type .. which is the only thing I like about IDEs ;-). Most UNIX coders use emacs, make etc .. rather than IDEs (though there are UNIX IDEs too, of course), so you don't have to go down that path .. most MS coders I know use VS.

     

    Now, once you want to run that app you just developed, you have to compile it. There are distributed C compilers (no idea if VS has this functionality) and at this point the more CPUs you have (either locally or on a LAN) the less time you have to wait ... so if you're compiling something big (if you work in a team etc, or use other static libs that must be compiled every time instead of linked dynamically [in my case I don't really do this anymore since I pretty much just work on my own code, and it's relatively puny cf pro dev apps ;-)], & have to compile others' code as well) maybe it's worth going down this route ... which probably means AMD dual-core chips.

     

    Another nicety is that you can test your apps for other OS's simultaneously using VMs, but perhaps that's less of an issue to you with VB & MS OS's ;-).


  4. Stick to Intel. I heard from my friend that AMD crashes a lot, and it will shut itself down when it's overheated. Don't know if that's true, but I'm sticking with Intel.



    I think you are referring to the old THG video where they killed an old AMD proc by removing the HSF for a while to simulate a fan failure. Those issues are long gone. Intel were the first to have a decent thermal diode on the chip to help it shut down ASAP, and AMD followed suit a year after that. Ditto for SpeedStep/PowerNow (though Intel needed it more, like, ASAP!) & MMX/SSE1&2&3/PCIe/USB1&2 (Intel-dominated or -defined 'open' standards, so what do you expect .. but AMD matched them ASAP) .. both AMD & Intel are working on virtualization/DRM hardware too, probably with near-simultaneous rollouts ... and AMD beat Intel to dual-core by months/years (depending on whether you ignore paper launches & look at where it's most useful, i.e. the high-end server range, as well as general desktop tasks etc). They never once said it was the wrong thing to do, unlike with x86-64, where Intel said no one needed it (take a look at the Far Cry update, and a look at IA64 performance-per-dollar ;-)). They are also taking their sweet time to produce an answer to large-scale AMD systems at the high end ... AMD is smaller, but faster, IMHO, at the moment.

  5. and it will shut it self down when its over heated.

    This is a GOOD thing, better a shutdown than a burnout.

     

    His problem is with overheating.

    He either did not fit the heat sink correctly, did not use thermal paste, overclocked, or simply does not use adequate cooling.

     

    WOW... dual core 4800+

    NICE !

     

    And I thought my AMD64 3400+ was fast!!

     

    I'd love to link a few of those in a cluster!


     

    Totally .. I think all the probs I've seen with failed discs etc were somehow related to not cooling properly (one I short-circuited & it literally was on fire with smoke billowing out of it; the other, well, someone turned on the heating on the hottest day of the decade ... a day when the tarmac at Heathrow had melted ... the AC's dial was labelled incorrectly, which I noted, but this person thought otherwise!). Now I'm a paranoid backup'r ;-).

     

    The 4800+ rating assumes you can make serious use of those two cores with 2 decent threads! There have been a number of people overclocking recent A64 FXs to beyond 3GHz .. that's probably more useful to most people. The only software I know of that makes use of >2 threads is for engineering & rendering .. though yes, I too wish I could persuade my lab to get a tightly clustered InfiniBand HTX system, if it were a bit cheaper & if the lab hadn't already spent money on a Xeon cluster last year ;-).


  6. It really doesn't matter much for any programming language, the compiler should adapt to the CPU used.

     

    They're totally NOT the same

     

    AMDs are waay cooler than Intels; while P4s are about 50/60°C in temperature, AMDs are often near the 30s/40s

     

    Also, it's a fact that AMDs have better performance per megahertz.



    It's not usually automatic, you have to tell the compiler to optimise for SSE3 & any other options you like (like x86-64), and this all adds to the bulk/size of the binary output. Most apps I've seen still don't take advantage of SSE3, for example, though AMD now has this too, so there's no issue .. plus SSE3 was just a few extra instructions (about 12, I think .. can't remember exactly .. one of the things it had beyond SSE2 was some further speed ups for matrix calcs).
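    To make the opt-in nature of that concrete, here's a tiny sketch of my own (not from any particular app) using one of the handful of instructions SSE3 added, the horizontal add, which is handy for summing dot products in matrix calcs. You have to ask the compiler for it explicitly, e.g. with gcc's -msse3 flag, which is exactly the kind of thing most apps never bother with:

```c
/* Minimal sketch of an SSE3 horizontal add (HADDPS) via intrinsics.
 * Assumes an SSE3-capable CPU and a compiler flag like gcc's -msse3. */
#include <stdio.h>
#include <pmmintrin.h>   /* SSE3 intrinsics */

int main(void) {
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);   /* lanes: 1, 2, 3, 4 */
    __m128 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);   /* lanes: 5, 6, 7, 8 */

    /* Before SSE3 you'd need shuffles plus adds; HADDPS adds adjacent pairs. */
    __m128 h = _mm_hadd_ps(a, b);   /* {1+2, 3+4, 5+6, 7+8} */
    h = _mm_hadd_ps(h, h);          /* lane 0 = sum(a), lane 1 = sum(b) */

    float out[4];
    _mm_storeu_ps(out, h);
    printf("sum(a) = %.1f, sum(b) = %.1f\n", out[0], out[1]);   /* 10.0, 26.0 */
    return 0;
}
```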

     

    K8 & P4 are totally different architectures. The biggest problem with P4 & the NetBurst architecture is its deep pipeline design (the number of stages an instruction passes through), cf K8, which is much closer to the good prev-gen x86 designs! The reason for going that route is to make it easier to clock at higher freqs, but the downside is much lower performance per clock cycle, and so they had to do stuff like SSE3 (SIMD) & parallel pipelines & intelligent compilers and branch predictors trying to guess what might come next so they can work ahead of time & keep all those pipelines full and busy ... unfortunately they found they couldn't do it well enough; there were too many branch mispredictions (equivalent to cache misses in HDD algorithms etc). Still, it was a really interesting ride from the theory point of view to see that this can't be done well even with really smart compiler writers etc :-).
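    If you want to feel the misprediction penalty yourself, here's a little toy benchmark of my own (nothing to do with any specific review): the same loop runs over random data, where the branch is a coin flip and the predictor keeps guessing wrong, and then over sorted data, where the branch becomes almost perfectly predictable; the deeper the pipeline, the bigger the gap:

```c
/* Toy demo (my own sketch) of branch misprediction cost: summing values
 * above a threshold is much faster once the data is sorted, because the
 * branch becomes predictable and the pipeline stops being flushed. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 22)

static long sum_over_threshold(const unsigned char *data, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        if (data[i] >= 128)          /* unpredictable on random data */
            sum += data[i];
    return sum;
}

static int cmp_byte(const void *a, const void *b) {
    return *(const unsigned char *)a - *(const unsigned char *)b;
}

int main(void) {
    unsigned char *data = malloc(N);
    for (int i = 0; i < N; i++)
        data[i] = (unsigned char)(rand() & 0xFF);

    clock_t t0 = clock();
    long s1 = sum_over_threshold(data, N);    /* random order: many mispredictions */
    clock_t t1 = clock();

    qsort(data, N, 1, cmp_byte);              /* sort so the branch becomes predictable */
    clock_t t2 = clock();
    long s2 = sum_over_threshold(data, N);
    clock_t t3 = clock();

    printf("random: sum=%ld in %.3fs; sorted: sum=%ld in %.3fs\n",
           s1, (double)(t1 - t0) / CLOCKS_PER_SEC,
           s2, (double)(t3 - t2) / CLOCKS_PER_SEC);
    free(data);
    return 0;
}
```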

     

    I think you meant performance (GHz) per watt (TDP). And you're correct, Intel are now focusing on this issue, but it'll be next year by the time they finally get something out to compete!


  7. Price wise; if you are a REAL gamer, go for the FX57, lots of power, lots of overclocking but hiiiigh price.

    For Dual Core, I'd suggest Intel, because they are cheaper.

    For Single Core, go for AMD.



    I agree that if you're a gamer you'd go A64 FX, and otherwise dual-core. But Intel announced a 130W TDP max for their Pentium D & EE 840 .. and they released a chip at that TDP in May (I think it was the 620 or something). If you overclock, you'd have crazy temps, though if you're going dual-core you probably wouldn't anyway. I'd still go AMD for dual-core because of the better design wrt the lower-latency integrated memory controller & crossbar switch, higher performance (as you mentioned), lower TDP & probably much lower electricity bills in the long run with PowerNow etc, esp if you use your PC a lot ;-). I'd only really go Intel if there was some platform benefit, but I can't see any at the moment. I still can't understand why someone would go the Intel route right now, esp since next year everything changes .. so there's probably no realistic upgrade path after next year!

  8. I used a serial/IrDA based one 5 years ago; it was a small dongle add-on to a set of Ericsson phones, so you must be able to get them dirt cheap on eBay.

    At the physical level everything's analogue (audio & RF waves in this case)! Audio captured by the mic is digitised (A2D), compressed, encapsulated & streamed over the net. GSM is digital, and audio frames are sent over that network; however, the underlying physical medium is analogue RF waves in digital-like pulses at very high frequencies, if you see what I mean ;-). It's kind of similar to the digital chirps of a modem over analogue sound waves .. and eventually electrical pulses down a copper wire pair. The phones/basestations unpack that audio data ... but it could just as easily be data, in the case of WAP .. now GPRS/3G are data/packet-oriented nets that can also carry voice traffic. 1st-gen GSM/WAP wasn't designed as packet based, it's circuit based, so AFAIK you'd have to pay per second whilst connected .. not at all ideal for an el-cheapo person (myself included).

    One of the audio codecs used is CELP, which uses linear prediction in the time domain by modelling the human vocal tract instead of human hearing, so these codecs aren't great at coding anything other than human speech .. worse than PSTN quality! It's ultra-efficient, lossy compression.

    Why would you want to convert digital data to sound & back again like a modem on digital cell nets, esp with GPRS/3G being commonplace now .. they're digital nets over radio .. of course this assumes they're cheap (they should be, since packet nets are shared & far more efficient than dedicated circuit-switched voice nets, so the telcos should encourage their use given the limited available spectrum .. I'm glad push-to-talk is popular now, since perhaps this will force the telcos to encourage data use also ...)? PSTNs were set up for analogue voice first, years before doing it digitally was economical (ISDN was the 1st relatively common digital data/voice 'integrated' net AFAIK, over copper at the local loop still .. Fibre To The Curb etc means hopefully that'll be nearly all fibre by 2020, maybe ;-)). Er, sorry for the tangent .. damn, I do this far too often :-(.
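    In case the CELP/linear-prediction bit sounds abstract, here's a very rough sketch of the idea (the samples & coefficients below are totally made up for illustration; a real codec derives its coefficients per frame from the signal): each sample is predicted as a weighted sum of the previous few, and it's the small residual, plus the coefficients, that actually gets coded, which is why it's so efficient for speech and so poor for everything else.

```c
/* Very rough sketch of the linear-prediction idea behind CELP-style codecs.
 * Samples and coefficients are invented for illustration only; a real codec
 * computes the coefficients per frame (e.g. via Levinson-Durbin). */
#include <stdio.h>

#define ORDER 3

int main(void) {
    short x[] = {0, 120, 260, 410, 540, 650, 730, 780};   /* fake speech samples */
    int n = sizeof x / sizeof x[0];
    double a[ORDER] = {1.8, -1.1, 0.25};                   /* hypothetical predictor coeffs */

    for (int i = ORDER; i < n; i++) {
        double pred = 0.0;
        for (int k = 0; k < ORDER; k++)
            pred += a[k] * x[i - 1 - k];        /* weighted sum of the previous samples */
        double residual = x[i] - pred;          /* the small error that actually gets coded */
        printf("sample %d: actual %d, predicted %.1f, residual %.1f\n",
               i, x[i], pred, residual);
    }
    return 0;
}
```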


  9. Sinistershadow's saying money's not an issue since the insurance co will pay for it ... so go for broke! I agree with the above; the only things I'd add would be:

    MB: a decent NF board that supports overclocking (adjustable multipliers, memory frequencies & timings, etc, independent of the PCI bus frequencies ;-)) if you go the FX route! Ideally two PEGx16 slots.

    CPU: Athlon FXs are the best gaming CPUs since they are the highest clocked (get the highest available .. FX57), have large caches and are multiplier-unlocked (for overclocking). X2s if gaming is not the most important thing to you, if running something complex in the background is something you think you'd like to do often & if software MPEG transcoding is important to you .. but with recent GPUs helping out, that's less of an issue.

    RAM: 2GB of CAS2 DDR500 (now supported by some AMD chips)!

    GPU: two (SLI) GF 7800 Ultras

    HDD: a bunch of SCSI Seagate drives RAID'd....


  10. Hi Kam,

     

    Thanks a lot for your posting! Unfortunately all the graphics adapters I own are AGP, otherwise I would really like to try out having three screens. Now I know why PCI graphics cards on eBay are more expensive than AGP cards. I think you're right, no one here needs 10 displays, three monitors would really be enough; who has a table with enough space for 10 screens? But I can imagine that the possibility of running ten (or even 40 with that driver trick) screens could be interesting for artists, video installations in galleries and so on.

     

    GreetingZ



    Hi Hazeshow, not to worry, you CAN still do it with AGP plus PCI cards. I've seen good mid-level gaming cards for PCI (GF5s), and of course AGP (GF6) cards are still fine in most modern games (Half-Life 2, Doom 3, Far Cry etc), and both have dual-head features so you can go up to 4 screens with decent gaming performance. There's no real need to go to PCIE right now unless you're a real gaming enthusiast. Personally, if you can, I'd wait for the crazy 4-way PCIE boards to come down in price, maybe in a year or 2. Also DX10/Xbox 360 will influence the marketplace a lot ... Of course the other side of that is that you could keep waiting forever!!

     

    He he, well, this is one famous setup (http://www.pibmug.com/files/wideview.jpg), but most people actually modify or DIY a custom desk!! With LCDs, though, this is now a bit easier to do without rearranging your whole room/desk for it :-). I remember there was a website that collected all these guys' setups with pics, but I've forgotten where that is, sorry, but you get the idea .. monitors going in ALL directions ;-).

     

    Also, there were several guys who actually setup projection based systems similar to pro/military systems .. but that requires extending the house, using a truck etc. One guy even bought the cockpit part of an old 737 .. it's in his basement or garage (the wife wasn't too happy, I remember, but resigned to it .. damn he's lucky .. er, or perhaps a bit too much of an extreme geek :-))).


  11. Thanks for the replies.  OK, I will look online for one of these adapters to plug into the cigarette lighter.  The laptop adapter says:

     

    Output: 12V    6.0A  120W



    Depressingly I couldn't find Maplin's 800W version online .. I'm sure there is/was one! Still Maplin UK has a 600W one for 40GBP (http://www.maplin.co.uk/?doy=8m10&ModuleNo=36316&). It's quite small (you could tuck it in the boot, glove compartment or even under a seat with a mini/nano-ITX board/case).

     

    I got this link off of a Google search for 12Vdc to 230Vac (https://www.portablepowertech.com/?aspxerrorpath=/productslist.aspx) and they go up to 2000W!! This reminded me that car/caravan & yacht/boat & airplane electronics shops have these too .. along with lots of other cool electronics like navigation, motorised satellite TV antennas, etc ... even for fast moving vehicles! 802.11/15/16/etc standards are also related interesting future tech to look at if you'd like wi-max like data rates for Net access from a fast moving car ;-). I can't wait for this, it'll totally change the current telco dominated landscape of relatively slooow & expensive 3G access ;-). Er, sorry for the tangent, I do that a lot.


  12. I do not use heatpipes, but I've read loads of reviews about them.

    You should distinguish between performance and application. A Zalman heatpipe HDD cooler won't give good results, just because HDDs normally don't need a cooler (and it has an insufficient cooling surface).

     

    Good point, that explains it, thanks a lot :-).

     

    For heatsinks, results can be very, very good; it depends a bit on the manufacturer (cheaper heatsinks most times have a worse implementation - read: worse thermal paste or worse holes and pipes).

     

    I don't really like cheap stuff ... other than the stuff I make myself, he he ;-).

     

    Heatpipes also work best at certain angles; it depends a bit on which heatsink you have.

     

    Weird. Is that to do with convection currents (hot air rising etc outside, for passively air-cooling the unit), or the hot gas rising and condensing to a liquid and then falling inside the tubing .. or something else entirely? I figured it can't be air bubbles since they're closed loops. Sorry for all the basic questions .. I'm not an expert on passive stuff, nor a thermodynamicist ;-).

     

    The heatsink cases (like the Zalman TNN or the Hush) are really cool (in fact, they get hot after a while :blink: ); their cooling capabilities are okay, but not the best.

     

    Totally. They all look so cool too .. not just full cases (I forgot about the Hush ... I'm so damn tempted, wish they'd stop tempting me!!) but also those tall passive external watercooling radiators!! But I guess I can't justify the cost when my simple watercooling/silent fan setup does the job for now! The Hushes are ideal for a lounge PC that doesn't look like a PC, as it were. Ditto those mini/nano-ITX boards/cases!! Maybe they'd be parent-proof if they booted off a write-protected IDE plug-in flash drive or mini-CDR!

     

    I underclocked and undervolted my PC to the minimum: 100*11 @ 1.08V :mellow: . I cooled it w/o a fan (SLK-947U) and the temp rose to 60°C and the PC shut down.


    Wow ... that'd be awesome in an el-cheapo PVR/media PC .. if it worked. Guess maybe a combination of a moderate-cost heat-pipe kit with undervolting might be the next thing for me to try when I get a moment & some spare cash :-).

  13. Nah, undervolting is annoying since you have to manually set it up, or set it up permanently (after paying for an expensive fast CPU!!). Clock gating (SpeedStep/PowerNow) is not good enough. Power gating is next. Ideally I'd like to see the recent asynchronous ARM research bear fruit in the x86 world ... but that's probably not realistic since it's just too difficult to design currently (you have to get everything talking to everything else in the chip, and at different speeds, so one part will have to wait a bit for the others, and there's also got to be some handshaking to sync all the different clocks, rather than just having a central one etc .. nightmare!). It's easier to design a relatively simpler chip & bump down the clock when not in use. Asynchronous ARMs are cool though, in theory ... different parts of the chip can slow down, depending on the instructions being executed :-). Also it's cheaper just to modify a desktop core than to start from scratch ;-). And finally compiler design would be harder/different (see the async ARM compiler research). Transistor current leakage is also being worked on (see the recent IDF slides on their future manufacturing process). Google for asynchronous & intel/sun/arm/etc for a bit more in-depth info :-).

    Another area of interest here (to me anyway) is Intel's announcement at the recent IDF about their plans to release an x86-based handtop running at a TENTH of the power consumption (0.5W :-)) of current mobile/laptop CPUs. Details are scarce at the moment, though. There have been many handtops released recently, often based on stuff out of IBM Research .. bizarrely, big old IBM didn't want to commercialise something they solved, and now just lets everyone else have it?!

    This is a much smarter & more sensible general-purpose approach than just running a 1GHz CPU at 100MHz all the time (undervolting), or some of the time (SpeedStep)! However, for now, that's all we have, I guess :-((.


  14. Yes, PATA is equal to ATA, meaning PATA is the second name for ATA.. but I also never heard of a 16MB ATA drive... I think maybe they have it now... just buy it; if it is PATA, then an ATA system can accept it..



    The TigerDirect page says both "Parallel ATA-133" and "Ultra ATA-133".

     

    http://www.pcworld.idg.com.au/ is a good page I just found explaining "PATA ... The EIDE interface, known retrospectively as Parallel ATA ... described as ATA133, Ultra DMA133, Ultra ATA133 or something similar. In this context, "ATA", "Ultra DMA" and "Ultra ATA" mean the same thing".

     

    http://forums.xisto.com/no_longer_exists/ is another page I just found when trying to find a review of your next HDD. I could find a PATA version review, but the interface (SATA/PATA) isn't important to an HDD review since a single HDD never reaches that max speed; the only real difference is that SATA sometimes has NCQ, which can improve performance if implemented right .. in this case it makes no difference & they give both the enabled/disabled benches anyway! I don't like Maxtor since I've seen loads of them go belly up in my lab .. I prefer Seagate's track record (from independent user surveys like those on SR .. but that's another matter!!).

     

    As you can see from that review/comparison, it does very well, only beaten by Raptors. And most of these desktop PATA/SATA HDDs perform pretty similarly in the overall WorldBench script scores (http://techreport.com/review/7903/maxtor-diamondmax-10-hard-drive/9)! It doesn't do very well in the Windows boot time test (http://techreport.com/review/7903/maxtor-diamondmax-10-hard-drive/11) since the Raptors have the 10K/bandwidth advantage .. but the 7200.7s also beat the non-NCQ version of your next drive .. by quite a bit :-o. Still, it's better than the really crappy v9 of the Maxtor, which takes nearly twice as long as the top Raptor to load XP, Doom 3, etc. Finally, in (http://techreport.com/review/7903/maxtor-diamondmax-10-hard-drive/12) you can see that noise & heat are OK too, relative to other similarly priced drives.

     

    Drives with 16MB caches have mainly been aimed at multimedia/AV enthusiasts (there are many SCSI ones; I saw a bunch a while ago). More cache won't necessarily be the smartest/fastest thing to do. It depends on the app, the FS block size (it should be set pretty big for large AV files .. this increases transfer efficiency but wastes space if there are lots of small files ... usually most files are small, so it's a compromise, as with all things!) and the HDD algorithms (how much queue cache is available for NCQ isn't that important if the queue length itself is tiny .. but to improve that would require spending much, much more on a pro setup!!).
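    As a back-of-envelope example of that block-size compromise (the file and block sizes below are just my own illustrative picks):

```c
/* Slack-space arithmetic for the FS block-size trade-off mentioned above.
 * Numbers are illustrative only: a small 5KB file stored with small vs
 * large (AV-friendly) filesystem blocks. */
#include <stdio.h>

int main(void) {
    long file_bytes = 5 * 1024;                      /* one typical small file */
    long block_sizes[] = {4 * 1024, 64 * 1024};      /* small vs big blocks */

    for (int i = 0; i < 2; i++) {
        long bs = block_sizes[i];
        long blocks = (file_bytes + bs - 1) / bs;    /* round up to whole blocks */
        long slack = blocks * bs - file_bytes;       /* allocated but unused */
        printf("%2ld KB blocks: %ld block(s) used, %ld KB wasted\n",
               bs / 1024, blocks, slack / 1024);
    }
    return 0;
}
```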

     

    Basically, if you're not doing AV work, and don't mind buying a cheap SATA PCI card (SATA is at 150MB/s max, but today's HDDs normally manage about 55/65/90MB/s depending on whether it's a drive like yours, a Raptor or a 15K drive!), it's probably best to get a SATA Raptor or two if you want max performance on a non-pro setup. If not, then this definitely looks like a good drive (other than my personal dislike of Maxtor, and generally of drives with limited warranties .. but perhaps I'm a bit paranoid after seeing so many crashes ;-)).

     

    Thanks for pointing this out, BTW .. very interesting drive nonetheless!


  15. I don't care what the specs and reviews say... I'm getting a friggin Revolution.. I've been sold on it ever since it was first announced.. yes, you guessed it.. I'm crazy 'bout some Nintendo haha.. but man, when they revealed that controller.. good gravy, I have dreams 'bout playing it haha... Revolution for me!!!



    I've got to admit I'm very impressed with Ninty's guts. These devices (positional sensing by use of MEMS inertial sensors) have been around for a few years .. I've used a Gyration mouse, which was pretty naff (Ninty has invested in Gyration, and presumably gets to use some of their IP). Recently these sensors have really improved, and are small enough to allow us to sense all 18 parameters needed to calculate the current sample's relative position (along with other sensors to correct for those sensors' errors & to help the positional algorithms). This was a really bold move, from a company that has a history of spending time to do things right & often differently to everyone else (Mario 64's 3D world .. yes, I know Tomb Raider did it too, but they were 1st to show it, so who knows! .. the DS touchscreen .. yes, touchscreens have been around, but not really used in games, let alone an entire platform set up around one .. Wind Waker's cel shading, the Game Boy, the NES & Game & Watch controllers, the low-cost FX chip idea thanks to Argonaut/Star Fox was also pretty innovative ... the list is endless).

     

    I'm not a Ninty fan, in the sense that I don't buy/play console games, as I said before. But I do respect them, they obviously have some smart/bold people there that are happy to take calculated chances, to push the boundaries .. it's very similar to academic research, in a way ... but obviously harder in the sense that they've got to convince a huge audience ;-). I hope they continue as a hardware & software co, unlike Sega.

     

    Sega's arcade platform is pretty interesting. It's a PC (intel CPU, Nvidia GPU). Presumably, if consoles weren't constrained financially, they would be similar to PCs too (like the original xbox). To me, the hardware isn't the important thing. It's the software. So if I was to buy a console, it'd be purely based on which platform had the most number of my fave games (in my case flight sims, race sims, graphic adventures, RTSs & FPS's!!).

     

    ShadowX, I heard earlier that MS might make two versions of xbox360. One with a cabled controller and no HDD, and one with one wireless controller & an HDD. No idea if they still use a USB-like interface (so all your steering wheels, joysticks, etc work) ... it'd be annoying to have to sell/rebuy all this stuff, esp if you buy relatively expensive ones with decent forcefeedback controllers etc. Anyway, I presume you mean you are gonna get the higher-end version :-). But note that the current specs state that PS3 also has an HDD. In both cases they are detachable & they say end-user upgradeable (i.e. without having to open them up, figure out encryption/interfaces, etc & doing it unofficially)!

     

    Also, the XB will probably use HD-DVD, and the PS3 will use BRD (Blu-ray) ... there's still a battle between these formats, though both should be backwards compatible enough to play the older SD DVD movies & DVD/CD audio etc. The XB's ATI GPU should be close to DX10-type specs, so should be more programmable and have high-end pixel shader units. The PS3's NV GPU is similar in performance to two current high-end GFs. That too is impressive. I think both have their advantages, but they are pretty much similar! Honestly, I think in terms of AI/CPU and graphics/GPU (and physics) they are pretty much the same. It'll all be about the games, the ease of use/functionality such as Xbox Live, and pricing/marketing/etc. None of them are particularly impressive compared to current PCs, which will soon have extremely powerful physics (PPUs) and sound (X-Fi) ... but again you have to make compromises with all machines, consoles & PCs, based mainly on price!

     

    I've got to agree with you, ShadowX; I too think the Xbox 360 looks like the best of the bunch, but only because I think Xbox Live was brilliantly executed, and they are claiming they are improving it for their 2nd console :-).

     

    People have looked at making game engines multi-threaded, but it always ends up being mostly focused on 1 core. So I think, at least near term, both the PS3's VPUs & two of the three XB360 PPC chips will go mostly unused! Also, note that the XB360's CPUs do have a VPU as well (the VMX-128 .. which has 128 registers .. I'm guessing it'll be like SSE or AltiVec on the old PPCs). Sony claims total system performance of 2 teraflops, and MS claims 1 Tflop ... Sony claims the CPUs can output 218 gigaflops and that the GPU is at 1.8 teraflops. In other words, it's the GPU that they get their numbers from. Unfortunately I've had a play around programming certain simple physics algorithms on current-gen GPUs, and I get about the same performance from my current CPU (and a lot worse if I compile the code with SSE3 optimisations). So though GPUs are theoretically much faster at FPU ops, they are currently too custom, so all the language workarounds slow it down somewhat, to the point where it ends up being similar to general-purpose programmable CPUs that are clocked at much higher rates (a few GHz vs 100s of MHz)!! So now I know for sure, this is all total hype. Sure, the next-gen GPUs will be more programmable, but they've been mostly talking about how great Cell is, and then it contributes a puny % to their overall 2TF figure. I'm not disappointed, though ... I expected this kind of marketing crap from Sony (they do it at every new console) ... I was just expecting it to be because real-world benches never come close to theoretical ones (esp when Sony restrict the caches of their VPUs so badly)! There was all this talk about Cell taking over from PCs. Embedded chips are everywhere, but they are not as capable at general processing tasks as modern PC CPUs are (soft modems, soft HD audio, soft LAN filtering, software MPEG transcoders, etc). And also, the XB's ATI GPU is said to be more programmable than NV's future GPU ... working with MS meant they focused on DX10 functionality, unifying pixel & vertex shaders and doing virtual graphics memory. So yes, the XB360 does look like it's going to be pretty awesome in that respect!

     

    The XB360 has 10MB of eDRAM (on the GPU chip .. thanks to ATI buying up a small graphics company, ArtX I think, that was trying to do this about 5 years ago .. allowing a crazy 256GB/s!!), and 512MB of 700MHz GDDR3 unified system/graphics RAM (22.4GB/s ... but with only a 21.6GB/s FSB ... that final figure is the most important IMHO ... the eDRAM really just works as a cache to make up for the lack of dedicated VRAM, I think ... if anyone knows for sure, please correct me). The PS3 has 256MB of RAM at 25.6GB/s, and 256MB of GDDR3 VRAM (with the RSX GPU able to read at 15GB/s, and write at 20GB/s). Summarising this, the XB360 has 21.6GB/s shared between CPU & GPU, but with a superfast-but-very-small cache and potentially twice as much VRAM thanks to DX10 virtual VRAM, and the PS3 has 25.6GB/s for the CPU & 15-20GB/s for the GPU. It's not clear-cut to me which will perform better in which games, but looking at shared memory buses between two CPUs in Intel's SMP architectures vs AMD's design, it's clear that it CAN be OK but only up to a certain performance point. Remember the XB360 must share roughly half the total PS3 bandwidth between THREE PPC CPUs and a GPU ... I would prefer dedicated memory & buses, but they had to make a cost-cutting compromise :-( ... rather than force punters to pay for 2x512MB (or lose billions of dollars like they did on XB v1)!!
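    For anyone wondering where those GB/s figures come from, it's just bus width x transfers per clock x clock rate. A quick sketch (the bus widths and the XDR label are my assumptions about the published specs, so treat the details as illustrative):

```c
/* Back-of-envelope memory bandwidth arithmetic for the figures quoted above.
 * Bus widths here are assumptions for illustration, not official specs. */
#include <stdio.h>

static double gb_per_s(double bus_bits, double transfers_per_clock, double clock_hz) {
    return (bus_bits / 8.0) * transfers_per_clock * clock_hz / 1e9;
}

int main(void) {
    /* XB360 unified GDDR3: assume a 128-bit bus, double data rate, 700MHz */
    printf("XB360 GDDR3: %.1f GB/s\n", gb_per_s(128, 2, 700e6));   /* ~22.4 */
    /* PS3 main RAM: assume a 64-bit interface at 3.2GHz effective */
    printf("PS3 XDR:     %.1f GB/s\n", gb_per_s(64, 1, 3.2e9));    /* ~25.6 */
    return 0;
}
```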


  16. You just need 2 screens and 2 VGA or DVI ports, either on 1 or 2 cards. It's known as dual-head or multi-head. You don't have to stop at 2 either. Computerjoe, since you dislike having two LCD bezels blocking your gaming enjoyment, you could go for three, and have the other two angled at the sides of the middle one. There are other methods too, but they are not meant for consumers and currently cost too much for mainstream users.

    Matrox used to be a leader, and are famous for quality 2D, but now really only focus on 2D and multi-head, and decent cards often have decent TMDS chips anyway, so there's no advantage. Their Parhelia drivers are good for multi-head gaming, but Nvidia's are pretty good too in this area. ATI, however (at least last time I checked, last year), suck in this respect. Also, many games simply don't take advantage of these driver features anyway. I think there was an old THG article that spoke about how cool it'd be to have 3-head racing games, to see opponents trying to overtake you without having to use trackers/HMDs. It also mentioned a use in RTS games, where you could have a zoomed-out view on one of the screens, and in online games, where chat could be on a separate screen. There are so many uses even beyond simply stretching out the viewpoint ... but most developers just don't seem to want to bother!! Perhaps its recent popularity, thanks to thin gaming LCDs allowing the use of further desktop space, may eventually change their minds.

    There's no need for two graphics cards; it costs nothing these days to add an extra DVI transmitter chip & socket. Adding a 2nd card can, however, increase your 3D performance (look up Nvidia's SLI on Google .. ATI also have an equivalent, though it's not very good!).

    It's best to get two DVI ports, rather than DVI & VGA. AFAIK most of the modern gaming cards I've seen have DVI ports, not VGA ones. As for TV gaming .. someone mentioned FPS's .. I'm amazed. I suck at FPS's when on a console (suck more than usual ;-)). Maybe it's the lack of a mouse, but I think it's also due to not being able to see as much (TV is at 640x480 unless you have an HDTV .. I guess eventually we need HDMI, so I can't see many bothering with this yet .. compared to high-end games that regularly run at 1600x1200) and having constrained reaction timings (normal TV is at 24/30Hz interlaced, so at best we'd get 60Hz games ... which is pretty lame compared to 8ms responses at 125Hz, and decent cards can get to 125fps in many games). TV, it seems, is still far from ideal for FPS games in my mind. Of course, it's a 'free' option, so what the heck!

    Finally, at the really high end, there are cards that have 4 or even 8 outputs (the 8-output ones are really only 2D-focused, by Colographic, though they have low-end ATI-based ones too; 4 outputs have appeared once or twice in ultra high-end gaming cards, and pro cards).

    In summary, Nvidia seemed to have the best multi-head drivers (and the best drivers overall for many other things, IMHO .. but there's no arguing that ATI cards have their advantages too .. see my other recent post today on ATI/NV gaming benchmarks). And since most games have no support for this, even today, I'd prefer to buy a really high-res fast LCD like Apple's HD LCDs, and then pump dual-link DVI data into it! ATI at the consumer end is potentially going to lead here in the non-Mac field, though their relevant cards aren't out yet (see their X1800 series).

    In non-gaming settings it's definitely useful to be able to set up certain sets of apps on different screens (e.g. email/chat on a separate screen from your main work, so you can respond ASAP without having to minimise the work apps etc).

    Organicbmx, better resolution depends on the type of DVI TMDS chips in your card & LCD, and their specs of course. The high end is dual-link DVI (the data channel is twice as fast, basically).

    Hazeshow, yes, you can easily go up to 4 screens, no probs, with either 3-4 cards with single outputs, or two cards with two outputs each. There are limits to scaling up, however. The current Windows consumer OS, XP, can handle up to 10 screens (http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/display_multi_monitors_overview.mspx), though you can use some custom drivers to trick Windows around this by making it think 4 screens are actually just one (I think it's a software-only solution, but it might have some FPGA box in there too .. sorry, I've forgotten; I saw it last year, and I'm sure you can find it on Google by searching for multi-head .. it was a small company dedicated to this area). Linux/X also has certain limits, though it was >10 for sure. But I don't think anyone here (home users, gamers etc) really needs (or can afford) that many displays .. however, there are some famous modders with decent flight sim setups with loads of CRTs or LCDs .. I'm very jealous of those guys, since I love flight sims!!! Many flight sims support programmability, for things like putting instruments on certain screens, or for accessing/setting up servers for traffic control, chatter, real-world weather/time ... but that's a specialist topic, sorry for going off on this tangent ;-).

    Anyway, I'd finally like to mention that some of you may be interested in looking at PCIE if you're getting into multi-head. There are many dual PEGx16 boards, and even one FOUR (yep, FOUR) PEGx16 board demo'd recently at a computing show in Taiwan (I think). I'm sure this kind of thing will eventually become common/cheap, as with all things in computing!


  17. They're both right. The PSP games look like PS2 games at quarter-VGA resolution. The tech is definitely very impressive in that they fit this much power in such a small size. Its CPU has an embedded VPU & 8MB of eDRAM, it has two GPUs with 2MB of eDRAM to handle both polygons (33M textured & lit polys/sec, 664Mpixel/sec fill rate) and curves (Beziers, splines with tessellation to polys .. and even hardware-accelerated skeletal animation), and it has video (MPEG) & sound (virtualised multi-channel) processors.

    The PS2's GS (Graphics Synthesizer) chip can output 75M polys/sec, and the EE (Emotion Engine) 66M polys/sec, 38M lit polys/sec or 16M polys/sec as part of a curve. These are all theoretical benchmarks, which mean nothing in real-world gaming benches. Unfortunately, it being a closed platform for consumers, I don't know of any standard benches for any console. But just taking all these theoretical peak figures: about 1/3 the power, but running at 1/4 the resolution, so it should be able to look better than PS2 games in theory .. eventually! This is all speculation at the moment; can't do much more than that!

    However, I have seen some videos of PSP games and they do look like PS2 games, if a little blocky sometimes (due to the low screen res, but not distractingly so). Basically, if you like console games, and have a lot of spare time when being mobile, then it's probably worth the cost ... but only as a gaming machine really!

    Unfortunately, you see, it relies on proprietary storage .. UMDs are read-only (and I doubt anyone will bother making a burner for those), or flash memory (Sony's Memory Stick)!! So unless you want to repurchase a whole load of movies on UMD, the movies would need to be on expensive 1GB Sony MSs ... and flash memory is usually expensive anyway compared to HDDs, let alone Sony proprietary stuff too .. it also has a limited number of writes! For audio, Sony made the recent decision to finally support MP3 rather than just their own ATRAC formats! Also, the low screen res means you won't watch at DVD res, let alone HD res! Codec-wise it can handle MP3/ATRAC, MPEG-4 etc ... which means annoying transcoding if you prefer to store AV in HD video, HD audio or open/non-lossy audio formats.

    Personally, I'm more of a PC gamer, since I often tend to get bored of many console games very quickly. The only console I can remember that had a good flight sim was the PC Engine (which also had a portable variant, the GT) .. Falcon was the game. I'd much rather put that money towards an ultra-portable laptop with a decent 3D chip, DVD drive, higher-res LCD and decent battery life. I could then use it for PDA & work tasks too!

    Summary: if you love console games and are often on the move, go for it. The only other mobile options are the DS (if you specifically like the touchscreen games or Miyamoto games), a decent gaming laptop or a subnotebook (if, like me, you are entertained for hours by adventure games, strategy games & PC RPGs ;-)).


  18. If you don't care about other things (video encoding, pro app performance) & only care about 3D gaming performance at normal resolutions, then it really is between Nvidia & ATI. Performance in different game engines varies depending on NV/ATI drivers (though it often improves over time too!).

    http://www.xbitlabs.com/articles/video/print/2005-17gpu.html is a good recent summary of 17 graphics cards in 30 games, and this is one (http://www.xbitlabs.com/articles/video/print/half-life.html) that focuses on half life 2 performance. As you can see, NV has amazing DoomIII-engine performance (where there are tons of agents moving around on screen), and ATI has the upper hand in Half Life 2 thanks to their decent pixel shader tech. In Far Cry it's fairly neck & neck.

    At the high-end, on the Nvidia side, it's a choice of 6800 & 7800 series cards. With ATI they recently announced their X1800XL/XT which are the first NV/ATI non-pro non-mac cards that have two dual-link DVI TMDS/RAMDAC's at 400MHz, AFAIK, after the GF DDL mac card .. so if that's important to you, it might be worth waiting for those to come out & come down in price!

    If you are going to PCIE & are replacing the board, for high-end gaming there's the choice of going for SLI (two graphics cards working together to nearly double performance, but only in certain games & only with two of the same GPUs ... SLI2 may fix these issues). If you do go this route, try to get both PCIE slots with 16 lanes each (not simply an x16 physical slot).

    Another choice is to go for a dual-chip GF SLI setup on a single card .. there are only 2-3 of those, & none are that great at the moment unless you specifically want some of their features.

    Overclocking is another issue you should consider.

    Finally, if you are going for a new PCIE system, then it's advisable to go 64-bit, not just for the 64-bit Far Cry update (which draws the scene further into the distance, etc), but also as an investment in future mainstream OS's (Vista) & games.

    Sorry I haven't been very specific, but the two links have pages of detailed stats. Also, if you give a more specific price (is it 100 or 200USD??), I can probably give a specific short list of cards. Personally, at this price range, I'd get one of the old BFG 6800 Ultras (they used to be 250USD a few months ago).


  19. If you hate the default blue of the Windows 'classic' (2k-style) desktop on XP, like I do, then just right-click on the desktop (or go to Control Panel & click the Display applet), go to the Desktop tab, and select the 'color' drop-down menu ... I select black ... much, much better :-).

    I agree, Gnome/KDE are very nice & will soon have most of the Aqua hardware-accelerated features, most likely within X. I really love Aqua .. it looks great AND isn't slow or a memory hog, thanks to wisely using the GPU. And their PDF viewer just flies, thanks to their thinner coding (cf Adobe's) and their use of Display PostScript etc ... it's still not perfect, but is by far the closest yet!

    There are some add-ons to Windows desktops, like WindowBlinds, but I found they made things even more bloated, so you just end up turning everything off .. in Windows!!


  20. For a desktop/server, uptime/stability is definitely more useful. You can use ACPI etc to send the machine to sleep (there are various states either saving to RAM i.e. S3/standby or to the HDD i.e. S4/hibernate), and that way you're 'up & running' ASAP without incurring an adverse electricity bill! MS's XP goals were 30sec boot, 20secs hibernate, 5secs standby. They are trying to reduce it further in Vista & by using flash-based SSD/HDD combination devices .. but the latter was also meant for mobiles to reduce power consumption of mechanical rotating HDDs.

    It's also a bit misleading in the sense that, in my case, with minimal params, the BIOS checks take about 5secs (mostly on the RAM, netboot, etc), XP takes another 10secs to get to the login screen, and then it takes a further 5-10secs to actually get to a usable desktop with around 30 processes running in the background! Basically, things are still loading when XP shows users the login screen! Not that important, though.

    For mobiles, though, it is important. MS focused on fast booting for Win XP, so I've never seen a machine take longer than 40secs once this came out (obviously I've not tried installing it on a 286 or something!).

    1. Disable things that you don't need in the BIOS (e.g. some disk controllers take an age to check through each ID).

    2. You should minimise the services that start on bootup .. some of the ones I've seen on friends' machines were only used rarely, so I changed these to manual startup (though often you'd need admin rights to restart those, AFAIK).

    3. Defrag, and use BootVis (I agree with the earlier poster .. it's very, very useful to help you focus on the slowest problem areas .. e.g. drivers that take a long time to load, dumb services that retry several times to find something on the network that's no longer there, etc). A person on another forum noticed a Nero service added about 15secs to their boot time!!

    4. Upgrade the RAM and/or HDD. The former is obvious: it reduces the need to access the HDD all the time. For the latter, there are two issues here: access time (the latency for the head to get around to the correct position to get the data) and bandwidth (the amount of data per sec the interface can transfer). To reduce the former you need faster rotation speeds (see the quick latency arithmetic just below) or more RAM/cache/etc. To increase the latter you need more modern interfaces, and a RAID array to max out that interface. SATA/NCQ & SCSI drives have intelligent queues to handle multiple requests, so that's another option. Another thing that's sometimes discussed is the various SSDs that are available at horrendous cost! They're not that useful, as Anand found out .. the BIOS ends up being the main factor in the boot time. Now, you could install a thin BIOS (if your board was supported by it), but then you'd be limited in which OS's you could run (2k, not XP) .. still, you could move to an embedded Linux and have a much faster boot-up .. but I presume you want Windows XP & to be able to run normal apps like games, so this option is probably out!!
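    Here's the quick latency arithmetic behind the rotation-speed point in 4 (my own illustration): on average the head has to wait half a platter rotation before the data comes around.

```c
/* Average rotational latency = half a rotation, so it falls as RPM rises. */
#include <stdio.h>

int main(void) {
    int rpms[] = {5400, 7200, 10000, 15000};
    for (int i = 0; i < 4; i++) {
        double rotations_per_sec = rpms[i] / 60.0;
        double avg_latency_ms = 0.5 / rotations_per_sec * 1000.0;
        printf("%5d RPM: ~%.1f ms average rotational latency\n",
               rpms[i], avg_latency_ms);
    }
    return 0;
}
```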

    http://www.anandtech.com/print/1742 is Anand's review of the Gigabyte SSD. Notice the single SATA Raptor booted Windows in 14s, and the (admittedly limited) 4GB SSD did it in 9s. It takes power from the PCI slot; data goes over a SATA connector.

    Personally I don't see the point. Ideally, I'd much rather just avoid booting up altogether (by making sure I keep my main OS stable, secure etc), and then use that RAM in its native sockets (unless I had some old RAM that wouldn't go in the board & wouldn't sell for much!) as a RAM drive at full speed (rather than SATA speeds) for caching apps or large files so they'd load quicker & would be faster to work with. Linux has some really good RAM-caching algorithms built in ... I'm not sure about Windows, but there are tons of soft RAM-drive vendors' ads that I've seen.


  21. To me they're quite different games.

    I played MS's train sim last year & thought it had some really amazing graphics, especially the steam effects on some of the older trains .. very pretty. But I didn't play it long enough to really get into it. Anyway, it is the most popular wrt sales, I think. The focus is more on the actual sim, i.e. aspects of driving the train itself.

    I played the 2004 version of Trainz (2006 is the current version) and it was graphically a bit behind MS (to me MS is focused on realistic visuals & Trainz is more cartoony .. but I don't care much about graphics; as you'll see below, I like strategy games like Railroad Tycoon!), however it's got some unique features that make it last longer IMHO (but again it's subjective .. I'm not that interested in the driving aspects, but more into strategy games etc!!). It even has some Railroad Tycoon-like strategy aspects. It focuses on the transport links themselves and transporting goods etc. The AI can even take over the driving somewhat, but I'm guessing that's not the sort of game most people are after! If you liked Railroad Tycoon, then you should try this out!

    And that brings me to Railroad Tycoon! I haven't played the new version (v3) yet, but I saw a video review a while ago and remember thinking it looks even better than v2 (perhaps that's expected, but Railroad Tycoon 2 was one of the most addictive/fun games I'd played in this area, so that's saying something ... but then again that was years ago)! This one goes into more depth on the business aspects, competition etc, which to me was a lot of fun ... but again it's a very different game from the other two!!

    http://www.uktrainsim.com/ might help your decision ... it is mainly a MS train sim site, but it does have a whole load of reviews incl several on the trainz sim.


  22. I didn't want to post a zillion short replies, so I'll use the acronym "WRT" meaning "with respect to" to refer to other posts (sorry if this is annoying).

     

    WRT #1, thanks for the really fun to read article. There were so many references to diverse fields of physics :P. There are tons of things I (and even the authors) just don't get yet ... like the delta-T, the pulsed-DC requirement, and the whole what on Earth's going on deal!!

     

    If this is a "low energy nuclear reaction" shouldn't there be some detectable H2 mass loss through radiation .. the vacuum chamber isn't lead etc (or at least they don't mention anything about safety issues wrt this!). So perhaps it's one of the other 2 types, but I'm not a physicist so can't really comment! Either way, I doubt the energy just "appears" .. it must be coming from somewhere, if this isn't an error .. the authors just don't know where yet.

     

    In the article replies, the leading researcher in H2 generation complains about errors in the terminology & states this is nothing more than energy absorption/dissipation differences (at different rates) over time ... but it still looks as if there's too much heat vs the electricity input (unless it was a measurement error ... and it's damn unlikely that all the authors (and there are many others in this group/field) would not have taken those necessary measurements correctly). Anyway, as the author states, until it is truly independently generating energy, it's probably just going to go the way of so many other related and misunderstood energy devices!!

     

    WRT #2, yeah I remember this too, I think it was in New Scientist magazine early this year .. a bacteria-based fuel cell where the bacteria produced methane or some hydrogen derivative. I remember it was very efficient but could only produce a few milliwatts of output (presumably that was just a prototype & they could scale up given no volume limitations!).

     

    WRT #3, AFAIK patents are published (and in the public domain after 20 years), so it's not like closed source code that is deliberately hidden from us (although there are decompilers, the output is usually not perfect due to the large number of possibilities available in the more English-like high-level programming language you want to convert to!) .. we could in theory replicate them ourselves given knowledge/time/money & even sell them once the patents expire after 20 years (in the USPTO case). It's only not allowed to sell them beforehand. But again, a reply in the article mentions the valid patent is a minor extension to a previous expired patent, so it's unlikely to be an issue at all for this device. I hate the fact that medicines have patents (so the poor in Africa can't afford them, or even worse, the big pharmas can't be @r5ed to research the AIDS derivatives that exist in Africa even though they've already done so for the ones in the West .. that's overly capitalistic in my book, but I guess they must answer to their shareholders' pocketbooks or be sacked!!). I hate even more that mathematics has now been given patents/copyrights for the first time in centuries, since it gets tied to CS algorithms (there was a damning article in the July [I think] 2005 edition of IEEE Spectrum magazine)!

     

    WRT #5, I'm sure the greedy 7u<<er5s will also try to somehow get us hooked on a hydrogen economy even though hydrogen is so abundant on Earth (water) and in the universe .. probably by making it less cost-effective or sensible (perhaps by scaremongering on safety issues, or making safe storage expensive through safety legislation .. even though Mercedes-Benz & BMW have already demo'd powdered or frozen storage in cars & gas stations!) for us to simply DIY-produce the source! Perhaps this is the real delay .. if they wanted the infrastructure in place, I'm sure they could convert/add on to at least SOME of their existing petrol stations!

     

    WRT #7, the scientific term is "conservation of energy". Basically nothing can be gained or lost, only converted from one state to another (even matter to energy, in the case of nuclear reactions). Personally, I believe in this principle, as we've never seen proof to think otherwise ... however, two cutting-edge physics research fields are looking at the possibility of nth dimensions to explain the weakness of the gravitational force and matter/black hole disappearances ... I always like to keep an open mind in life, esp since we still understand so little and may never understand everything (according to many famous scientists) B).


  23. If you have any insight into how I would be able to share that SATA HD without the need for a functioning Windows O/S on my side.

    -PatchCR



    Most NAS enclosures are expensive (250quid is the cheapest I've seen) and just run a customised embedded UNIX distro on a low-end low-power board! The most famous/popular is by SnapAppliance and is iSCSI based (SCSI over TCP/IP/Ethernet). Have you thought about buying a low-cost mini-ITX board, and setting it up as a file/print/anything-server?

     

    One other idea might be to use a dual-ported SAS drive ... but that would mean buying a new drive, which I'm guessing isn't a solution for you. If you are interested, take a look at http://forums.xisto.com/no_longer_exists/

    Basically, a SAS drive has two SAS ports on it that can connect to two independent SAS controllers, for redundancy purposes. As far as I can tell, the two controllers could be in two different PC's ... but I haven't tried that out so it's best to ask your drive manufacturer before spending this sort of money ... I'm interested in trying it out, but that's most likely only after I can find a cheap SAS controller!!


  24. I would go further and say that overclocking is overrated .. it has been established that it's not the clock speed of the system which dictates the performance, but the overall design (choice of I/O, synced memory etc)



    It's not just a faster CPU; in overclocking you also often increase the FSB (the speed at which the memory talks to the CPU .. hence the need for fast, ideally low-latency RAM when doing this). This can improve performance in some FPS's.

     

    Related to this, I'm not sure what hassle it'd be to try and find appropriate overclocking RAM for laptops .. and I agree that without decent space for cooling, overclocking is probably pointless .. unless you're lucky enough to find a really overclockable chip (the equivalent of a P4C, for example, that can be overclocked quite a bit even just using air cooling).

     

    Personally, I'd just buy a decent desktop to overclock, and have a laptop for mundane productivity stuff on the move .. my laptop doesn't even have a proper (NV/ATI) GPU in it .. so my advice is probably not the most appropriate for ya.


  25. Nice! If you had lots more money, you would be a client for an optical jukebox device.



    He he .. I actually bought a cheap PowerFile jukebox on eBay for 500 bucks and I'm in the process of trying to mod it! I did look at those pro jukes and they cost 1000s whilst not being upgradable. I even thought about making my own (there was a guy online with a website about his Lego Mindstorms autochanger) but the performance (speed of changing) and accuracy/precision of the robot arm weren't too hot.

     

    Actually, I don't mind changing discs that much; it's really when I'm at work that it's handy to be able to load a specific disc into it! With my CD-MRW discs, my filesystem actually overflows onto some of these optical discs, so it's sometimes a necessity to load a disc just to run a program!!
