Pro/ENGINEER Performance of High End x86 CPUs (Athlon vs P3)
by Anand Shimpi & Dan Kroushl on January 3, 2000 3:38 AM EST, posted in CPUs
Pro/E’s Demands
As we alluded to earlier, Pro/E is a very demanding application. The models created within Pro/E are generally quite large and manipulating them is what puts the greatest strain on the systems that run Pro/E.
For starters, the application and the specific tasks its users put it through impose very demanding memory requirements. The typical Pro/E workstation uses at least 256MB of RAM, and seeing a workstation with over 1GB of RAM is not uncommon.
Common Pro/E assemblies can contain up to 5000 components, which is a challenge for any workstation to deal with. Just reorienting an assembly to get a good look at the area in which you are going to reference your next component can take over a minute. An assembly cross-section can take 20 minutes, a global interference check, 30 minutes. Add up all of those minutes and you get hours and hours of thumb twiddling. And, not to mention, sore thumbs.
So, in the end, we have an application that places heavy demands on a system from both the memory and the CPU perspective. At the same time, we are looking to use the latest x86 processors as the basis for an affordable workstation capable of driving this very demanding application.
In order to find out which systems perform the best in a given application, we always turn to application-specific benchmarks. A user who wants to know which video card runs a game like Quake III Arena or Unreal Tournament better than the rest looks at timedemo scores, and a user who wants to know which CPU is best suited for a Pro/E workstation turns to BENCH99, SPECapc, and OCUS scores. Just as demo001 and UTbench are not necessarily familiar benchmarks to all Pro/E users, BENCH99, SPECapc, and OCUS are not necessarily familiar benchmarks to all Quake III and Unreal Tournament fanatics. In order to establish what these industry standard benchmarks are, let's take a look at where their results are published and what those results represent.
Pro/E Benchmarks
Pro/E benchmarks are not too different from the Winstone and SYSMark benchmarks that AnandTech readers are used to. Both Winstone and SYSMark test the performance of various applications by running a variety of different “real world” tasks on a set of sample data. Whether that sample data is a word processing document, a spreadsheet or even an image file is dependent upon the particular application that is being benchmarked. Pro/E is no different. In the case of Pro/E, the sample data comes in the form of a pre-designed “part” that is being manipulated during the course of the test.
With Winstone and SYSMark, results are usually reported as a number that illustrates how well the system being benchmarked compares to a baseline system. For example, a Content Creation Winstone 2000 score of 10.0 indicates performance equal to that of the Content Creation Winstone 2000 base test machine, and a score of 20.0 means performance double that of the base machine. Performance is calculated according to how long it takes for the system being tested to run through the various parts of the benchmark.
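To make the scoring scheme concrete, here is a minimal sketch of how such a baseline-relative score can be computed. The formula and the timings are our own illustrative assumptions (the actual Winstone weighting across workloads is more involved): the score scales inversely with the time taken relative to the base machine, with the base machine pinned at 10.0.

```python
BASELINE_SCORE = 10.0  # score assigned to the base test machine by definition

def baseline_relative_score(baseline_seconds: float, system_seconds: float) -> float:
    """Illustrative scoring: finishing the workload in half the baseline's
    time yields double the baseline's score."""
    return BASELINE_SCORE * baseline_seconds / system_seconds

# Hypothetical timings: base machine takes 600 s, test system takes 300 s.
print(baseline_relative_score(600.0, 300.0))  # 20.0 -- twice as fast as baseline
```

Under this scheme, a score is a speed ratio, so doubling the hardware's throughput doubles the number, which is what makes scores from different machines directly comparable.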
Similarly, Pro/E benchmarks generally report performance in terms of the time required to complete various calculations and manipulations involving the test part used in the benchmark. So understanding what these benchmarks represent isn't too difficult, at least for someone who has already been exposed to performance benchmarks of this nature.