July 16th, 2013, 08:53 PM
One of the primary reasons that dmittner has given for implementing multi-threading is to reduce the performance cost of having to reload the same PHP code on every page request (by keeping it running between requests).
Reducing the performance cost of having to reload the same PHP code on every page request is exactly what APC can do too, even if it does it in a completely different way.
Therefore if your primary goal is to reduce the performance cost of having to reload the same PHP code on every page request, then it makes complete sense to consider APC as an alternative to multi-threading.
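Beyond opcode caching, APC also offers a shared-memory user cache that survives between requests. A minimal sketch of that idea (requires the APC extension; the cache key and the `loadConfig()` helper are hypothetical):

```php
<?php
// Sketch: caching an expensive result in APC's shared-memory user cache
// so it survives between page requests. Requires the apc extension.
// The key name 'app.config' and loadConfig() are hypothetical stand-ins.
function getConfig() {
    $config = apc_fetch('app.config', $success);
    if (!$success) {
        $config = loadConfig();                 // expensive startup work
        apc_store('app.config', $config, 300);  // keep it for 300 seconds
    }
    return $config;
}
```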
This isn't a restriction of pthreads specifically, this is a restriction of multi-threading.
Threads are not "free" so you can't create an infinite amount of them. If you create too many it will bring down the application when it runs out of resources. There is no precise definition of too many; it is not closely related to the number of cores the system has.
July 17th, 2013, 02:07 AM
Originally Posted by E-Oreo
Is there a way to do asynchronous programming in PHP without multithreading? For instance, if I determine I can only support 10 (or whatever) threads at the same time, that means I can only serve 10 clients at a time--given my current model. I'm not sure how to serve multiple clients at once without each one having their own thread.
If there's another mechanism for asynchronous programming then I could instead spread the load across threads via a task handler that delegates tasks out to the X number of threads I decide to set as the max.
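The task-handler idea described above is exactly what the pthreads `Pool` class provides: a fixed number of worker threads servicing an arbitrary number of queued tasks. A sketch (requires a ZTS build of PHP with the pthreads extension; the `Task` class and the client IDs are illustrative):

```php
<?php
// Sketch: capping concurrency with a pthreads Pool. Tasks beyond the
// thread limit queue up and are run as workers become free, so 10
// threads can serve many more than 10 clients over time.
class Task extends Threaded {
    private $clientId;
    public function __construct($clientId) { $this->clientId = $clientId; }
    public function run() {
        // handle one client's request here
        printf("worker handling client %d\n", $this->clientId);
    }
}

$pool = new Pool(10, Worker::class); // at most 10 threads, ever
for ($i = 0; $i < 100; $i++) {
    $pool->submit(new Task($i));     // 100 tasks share the 10 threads
}
$pool->shutdown();                   // wait for outstanding tasks to finish
```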
July 17th, 2013, 03:46 AM
I may be naive here (and, on re-reading what I'm about to post, wandering a little off topic), but I tend not to worry about concurrency in PHP, because I can achieve that at the server level. nginx claims to support tens of thousands of concurrent connections....
...and without PHP-level concurrency I have therefore taken two approaches:
1 - minimise the resources and time each request takes
2 - look at how many requests I can handle per second, i.e. the rate
It is very unlikely that any of the servers I am deploying would get 10,000 at once; even at high load, 10,000 per second would be huge for me.
So let's look at a rate of 10,000 requests per second:
Given that my response times under load average 30 ms, even without concurrency that's 33 requests per second. So if I wanted to process 10,000 requests per second, I would need concurrency of ~300. With average memory usage of 1.5 MB (reported by PHP; not sure what else nginx needs on top of this), I need a minimum of 450 MB of free RAM. On my 512 MB dev VPS I have 368 MB free. This will of course reduce as the database grows... current max 8096 requests per second.
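The arithmetic above is an instance of Little's law (concurrency = arrival rate × latency), and can be spelled out in a few lines, using the numbers from this post:

```php
<?php
// The capacity arithmetic above, spelled out (numbers from this post).
$avgResponseMs  = 30;     // average response time under load
$targetPerSec   = 10000;  // desired requests per second
$memPerWorkerMb = 1.5;    // average memory per PHP worker

// Little's law: concurrent workers = rate * latency (in seconds)
$workersNeeded = (int) ceil($targetPerSec * $avgResponseMs / 1000); // 300
$minFreeRamMb  = $workersNeeded * $memPerWorkerMb;                  // 450 MB

printf("workers: %d, RAM: %.0f MB\n", $workersNeeded, $minFreeRamMb);
```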
So, for me, the question really is: can pthreads improve my request-response rate? It would need to reduce response times without incurring more memory consumption. This is quite important for me, as the application I am building will be deployed on a (cloud) server-per-client basis, and the size of the server will depend on the number of users the client has. I'm already using APC for opcode caching; with a tiny code base, I only need a small cache size.
July 17th, 2013, 06:35 AM
There probably are calculations that you can do to determine the theoretical maximum capacity of your stack. However, they aren't calculations I would bother to do, at least not without an intricate (far more intricate than I possess) knowledge of each piece of software in the stack. It's highly unlikely that your operating system will support tens of thousands of processes (php-fpm I assume to be your execution method for the interpreter), nor would I expect run-of-the-mill SQL or NoSQL setups to be able to service anywhere near that amount of activity... so these aren't useful things to think about, in my view. Maybe somewhere, without thinking, I make these decisions, but sit down with pen and paper and a calculator I do not, and I don't think you should either; it doesn't seem to be a good use of time. Being sensible is all you can do: don't send a million people through a doorway that is 2 feet wide...
Originally Posted by Northie
It's pretty hard for me to say if you can see improvement; I'm not sure what it is you are doing, or how you are doing it, and most importantly, I cannot tell from my desk what is restricting your activity.
Cutting-edge opcode caching is performed by Zend Optimizer+ (ZO+) and you should definitely research it. In addition to caching opcodes it actually performs some optimization (it's in the name) before caching or executing anything. Its allocator is simpler and faster (but a bit hungrier; it's 2013, who cares, nobody is counting bytes anymore). In addition, when you use APCu and ZO+ you are no longer sharing a lock between the opcode cache and the user cache, which leads to an improvement too... I didn't really want to talk about opcode caching or APC(u)/ZO+; it's a whole other subject.
What you should not do is drop everything because something new has come along. What you should do is get your feet wet, try it out, see what things you can do that you couldn't before, see where your application specifically could change for the better or what new things you can think up ...
Hope that's helpful, it's not very exact, I can't really answer the question directly in the way you wanted ... maybe someone else will have a stab at it in a way you understand, I don't think they should, but you shouldn't listen to what I think anyway ...
July 17th, 2013, 11:52 AM
Assuming this was directed towards me.
Originally Posted by Northie
I see concurrency in my application as a requirement because, typically, PHP web requests are run concurrently by Apache. But now I'll be funneling all of those requests into a single persistent application. If that can't perform concurrency to the same degree as Apache (and other entry points) then my application will become a bottleneck very quickly.
So I'm not sure I need threads specifically for this (though they'll help spread the load) but I do need concurrency at a minimum. I just need to figure out if concurrency is possible within a single PHP process without threads.
July 17th, 2013, 12:15 PM
Not at all - just airing the thoughts as they come through my head.
Originally Posted by dmittner
I can see the need for concurrency, in order to process simultaneous requests... but looking at my setup for my projects, 99% of the requests are actually very granular and almost atomic, deliberately so, to minimise resources. At the moment I don't think I need concurrency at the PHP level, as it's not PHP that's listening... I've already got it at the server level with nginx.
For example, there's no point in selecting a resource from a database in one thread while querying the permissions in another thread, as requests that are not permitted will not need the data from the database... thinking out loud now: concurrency would speed up valid requests but also let invalid requests consume resources... hummmm
July 17th, 2013, 07:15 PM
It's possible using multiple processes, but that's more resource intensive and more complicated to program than multi-threading. You also can't create an infinite number of processes, and creating too many processes can bring down the whole operating system as opposed to (probably) simply bringing down the application. So I wouldn't recommend that path.
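For completeness, a sketch of what the multi-process approach looks like, using the pcntl extension (POSIX systems, CLI only; the child count and work are illustrative). Each child is a full OS process, which is part of why spawning too many is so dangerous:

```php
<?php
// Sketch: concurrency via multiple processes using pcntl_fork().
// Requires the pcntl extension; POSIX, CLI only.
$children = [];
for ($i = 0; $i < 4; $i++) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    } elseif ($pid === 0) {
        // child process: do one unit of work, then exit
        printf("child %d (pid %d) working\n", $i, getmypid());
        exit(0);
    }
    $children[] = $pid;            // parent keeps track of its children
}
foreach ($children as $pid) {
    pcntl_waitpid($pid, $status);  // reap each child to avoid zombies
}
```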
I'm not aware of any general-use methods of asynchronous programming in PHP besides threads and processes. Some specific methods do have asynchronous modes (particularly those that issue network requests or read files), but those are not useful for executing arbitrary PHP code.
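One of those built-in asynchronous modes is stream multiplexing: `stream_select()` lets a single PHP thread wait on many streams at once without blocking on any one of them. A self-contained sketch using a local socket pair to simulate a client:

```php
<?php
// Sketch: single-threaded I/O multiplexing with stream_select().
// No extensions needed; a socket pair stands in for a real client.
list($a, $b) = stream_socket_pair(STREAM_PF_UNIX, STREAM_SOCK_STREAM, STREAM_PROTO_IP);

fwrite($b, "hello");                      // simulate a client sending data

$read = [$a];
$write = $except = null;
// Wait up to 1 second for any watched stream to become readable.
if (stream_select($read, $write, $except, 1) > 0) {
    foreach ($read as $stream) {
        echo fread($stream, 8192), "\n";  // prints "hello"
    }
}
```

In a real server this call sits in a loop, with every connected client's stream in the `$read` array, so one process serves many clients without a thread per client.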
Your system would be able to support far more than 10 threads at a time. Dozens of threads should be no problem, probably even a few hundred would be OK. It depends a lot on your hardware.
Systems are normally designed to support one client per thread.
Yeah that logic is very sound. If your PHP backend is going to be acting as a persistent client/server it definitely needs concurrency.
What sort of a stack are you looking at on the server side? Do you have a normal web server executing PHP scripts which then interact with your backend, or is the client sending HTTP requests directly to your PHP application?
July 17th, 2013, 08:17 PM
It might all change up as I'm thinking through it, but the preliminary setup will be PHP front-end controllers taking the conventional HTTP requests, translating them to the server, getting a single response back, and printing out the headers/response.
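That front-controller-to-persistent-backend hop could be sketched over a Unix domain socket (the socket path and the line-based protocol here are hypothetical; for the demo, one script plays both sides):

```php
<?php
// Sketch of the front-controller -> persistent-backend hop described above.
// The socket path and one-line protocol are hypothetical; in this demo a
// single script plays both the backend (server) and front controller (client).
$path = sys_get_temp_dir() . "/app_backend.sock";
@unlink($path);

$backend = stream_socket_server("unix://$path", $errno, $errstr);

// Front controller: translate the HTTP request into one line for the backend.
$client = stream_socket_client("unix://$path", $errno, $errstr);
fwrite($client, "GET /page\n");

// Backend: accept the connection, read the request, send a single response.
$conn = stream_socket_accept($backend, 1);
$request = trim(fgets($conn));
fwrite($conn, "response for $request\n");

// Front controller: relay the backend's single response to the browser.
echo fgets($client);   // "response for GET /page"
```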
Originally Posted by E-Oreo
I might split out separate endpoints for full HTML, partial AJAX HTML, JSON, etc. delivery formats.
After that will be WebSockets. I'll either have an intermediary script running for the duration of the connection, translating into the server, or have a secondary server logic flow to handle what's unique for WebSockets. Not sure yet.
July 17th, 2013, 11:35 PM
I know this isn't directly related to threading but if anyone's following my use case which will eventually require them (and provide an example of when PHP can make use of threading)...
Proof of concept is looking pretty nice. I expect some additional overhead once the server isn't hardcoded to load one specific page, but:
Conventional request: ~750ms request time
Server via Endpoint: ~350ms request time
That's executing the very same controller. This confirms my earlier estimate that I had 0.4 seconds of overhead per request, just from loading all my class files and performing regular start-up work.
A couple other notes:
- The CPU is probably ~7 years old, a single-core 2 GHz. Obviously a more modern CPU would cut load times as well. But still, this is cutting the load time almost in half. That's pretty neat, and I imagine there would be a similar gain even if the times were lower overall.
- My application uses a very heavy active record pattern OO model, so there are a lot of files and a lot of code. The system overall is in the realm of 75k lines. So a lot of that overhead I'm trying to avoid is on account of that.
Anyway, just wanted to clarify the setup and use case a little, since it definitely wouldn't apply to every application and so the threaded server approach would often be overkill.
November 16th, 2015, 08:43 PM
I know this is an old post, but if you are still around I'd be curious to know how you went with this.
Originally Posted by dmittner
I am doing a very similar thing: an initial HTML page that calls a websocket server, a simple sub-protocol shared between them, and the websocket server doing double duty talking to a socket client in one or more app instances. All to get the persistence you speak of and keep the app scope completely separate from the websocket server.
It's all working now, with the exception of the multiple instances, and it looks like PHP pthreads may be the go. In many ways it seems too good to be true, but from my research so far it looks fantastic. Time will tell.
November 16th, 2015, 09:30 PM
I'm sorry to say the project was put on the back burner due to changing jobs and priorities, but I do recall leaving off pretty hopeful that it could work. And unfortunately what I'm doing now wouldn't benefit much from the concepts so I haven't had the motivation to get back into it.
Originally Posted by Berniev
Best of luck in your project, though.