December 24th, 2012, 01:04 AM
Determine client processing limits without a stress test.
I don't know where to put this because I didn't see a "server side systems" forum. I'm also relatively unfamiliar with how server-side processes work at a larger scale.
I want to estimate the processing capabilities of a user's client device.
What is the most relevant information available for making that estimate?
I know there are HTTP headers that are sent to the server with every request.
I believe the device or operating system is included in at least one of them.
I know this from my experience on facepunch.com (when you make a post, other users can see which OS you posted from).
I don't need specifics. I just need a ballpark guess.
I need this data so that I can guide my MVC flow properly.
December 24th, 2012, 09:11 AM
Strictly speaking the most relevant information will be things like CPU, total and available RAM, and background services installed and running. You can't get that.
If you want to measure the capabilities of the browser then that's an entirely different thing. You can generally grab the OS and browser from the User-Agent string submitted with HTTP requests.
So my question is this: what capabilities do you actually care about? DirectX 11 support? CSS 3's border-radius? 3D rendering? File uploads? Some of these you can determine from the server; some you cannot.
Last edited by requinix; December 24th, 2012 at 07:53 PM.
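(Editor's note: pulling the OS out of the User-Agent string could look like the minimal TypeScript sketch below. The `guessOS` function and its substring checks are illustrative assumptions, not a complete parser; real User-Agent strings vary wildly, so production code should rely on a maintained UA-parsing library.)

```typescript
// Naive sketch: guess the OS from a User-Agent string.
// The patterns below are illustrative, not exhaustive.
function guessOS(userAgent: string): string {
  if (/Windows NT/.test(userAgent)) return "Windows";
  if (/Android/.test(userAgent)) return "Android"; // must check before Linux
  if (/iPhone|iPad|iPod/.test(userAgent)) return "iOS";
  if (/Mac OS X/.test(userAgent)) return "macOS";
  if (/Linux/.test(userAgent)) return "Linux";
  return "unknown";
}

// Example: a desktop browser UA resolves to "Windows".
const ua = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11";
console.log(guessOS(ua)); // → "Windows"
```

On the server you would feed this the `User-Agent` request header; note that the header is client-supplied and can be spoofed or absent, so treat the result as a hint, not a fact.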
December 24th, 2012, 10:35 AM
Processing speed is what I'm concerned about. So RAM would be nice to have, but it seems I can't get that. My next best option is gauging my view exports based on the device.
Origin of my concern:
I've seen some "mobile optimized sites" that change with the width of the browser, but I don't want to optimize after the data has already been sent to the client end, because I'm not optimizing the display, I'm optimizing the processes that run on the client end. (And since I already have a logic structure for that, I might as well manage the display through the same system.)
I could just be paranoid about processing capabilities, though. Perhaps it's something I shouldn't be worried about. But the structure will be useful once we enter the age of silicon photonics, when the computers we use now will need to be optimized.
Alright, I'm rambling now
December 24th, 2012, 07:56 PM
I say give it a shot: run the stuff client-side and see how it performs. Make a couple of prototypes that are more strenuous than the real version and test them. I can tell you that the choice of browser makes more of a difference than processing speed or available memory.
If you discover that it lags too much on the systems you care most about, move the cacheable and/or client-agnostic and/or heaviest processing (if any of that applies) to the server side and try again.
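(Editor's note: the "prototype and measure" approach above could be sketched as a small client-side micro-benchmark like the one below. The workload, iteration count, and 200 ms threshold are made-up values for illustration; you would calibrate them against the devices you actually care about, and browser JIT behavior makes any single run noisy.)

```typescript
// Time a deliberately strenuous loop to ballpark how fast this
// client executes JavaScript.
function benchmarkMs(iterations: number): number {
  const start = Date.now();
  let acc = 0;
  for (let i = 0; i < iterations; i++) {
    acc += Math.sqrt(i) * Math.sin(i); // arbitrary numeric work
  }
  // Reference acc so the loop can't be optimized away entirely.
  if (!isFinite(acc)) throw new Error("unexpected result");
  return Date.now() - start;
}

const elapsed = benchmarkMs(2000000);
// If the strenuous prototype is too slow on this client, fall back
// to a lighter flow that pushes heavy processing to the server.
const useLightweightFlow = elapsed > 200; // threshold is a guess
```

A natural design here is to run the benchmark once on first visit, stash the verdict in a cookie or localStorage, and let the server pick the MVC flow from that on later requests instead of re-measuring every page load.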