In 2011, Intel launched Sandy Bridge, which was substantially better than anything that came before it. In 2012, they launched Ivy Bridge, which was a bit faster than Sandy Bridge at stock speeds but wouldn't overclock as far, so the two were basically tied if you overclocked both. In 2013, Intel released Haswell, which likewise was a little faster at stock speeds but wouldn't overclock quite as far. So a CPU released in 2011 is still as fast as it gets if you overclock everything. Furthermore, with 2014's Broadwell rumored not to come to desktops at all, or perhaps only in a crippled, low-power form, Sandy Bridge could easily remain nearly tied with the high end until Skylake arrives well into 2015, or maybe 2016, given how everything has been getting delayed lately.
And it's far from clear that Skylake will improve single-threaded CPU performance, either. There's a huge emphasis on bringing CPU power consumption down, whether for laptops, tablets, data centers, or whatever. Process nodes can be tuned for higher performance or for lower power consumption, but there are trade-offs, and while x86 CPUs were once built heavily for performance, the emphasis has shifted toward lower power, so it's far from clear that future process nodes will even be able to clock as high as current ones. There's also the issue of heat density: CPU cores proper now take up a very small percentage of total die space, so even if you can dissipate a fixed amount of heat from the die as a whole, having most of that heat produced in a few tiny areas can still make the chip a major pain to cool.
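To put rough numbers on the hot-spot problem, here's a quick back-of-the-envelope sketch in Python. All of the figures are made up purely for illustration; they aren't measurements of any real chip.

```python
# Hypothetical numbers chosen only to illustrate heat density; not real chip data.
die_area_mm2 = 160.0    # total die area
core_area_mm2 = 30.0    # area occupied by the CPU cores themselves
total_power_w = 80.0    # total power dissipated by the die
core_power_w = 60.0     # portion of that power produced in the cores

avg_density = total_power_w / die_area_mm2        # W/mm^2 if heat were spread evenly
hotspot_density = core_power_w / core_area_mm2    # W/mm^2 inside the core region

print(f"Average power density:  {avg_density:.2f} W/mm^2")      # 0.50
print(f"Hot-spot power density: {hotspot_density:.2f} W/mm^2")  # 2.00
```

With these made-up numbers, the hot spots run at four times the average density, which is why a chip can be hard to cool even when its total package power looks perfectly manageable.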
This isn't to say that we've reached the end of CPU performance, period, of course. You can still add more CPU cores. But that only helps if code is designed to take advantage of more CPU cores, and that gets tricky for many programs once the core count gets high enough. In contrast, GPUs will still scale well to arbitrarily many shaders for the foreseeable future, and I see no reason to believe that GPU advances will slow down until Moore's Law dies. (Memory bandwidth is a big problem today, but it will get something of a reprieve at 14/16 nm with large L3 caches on the GPU die.)
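The limit on just adding cores is easy to see with Amdahl's law, which I'll use here as my own illustration rather than anything measured from real programs: if a fraction p of a program's work can run in parallel, the best possible speedup on n cores is 1 / ((1 - p) + p / n).

```python
# A minimal Amdahl's law sketch; the parallel fractions are arbitrary
# assumptions for illustration, not measurements of real programs.
def amdahl_speedup(p: float, n: int) -> float:
    """Best-case speedup on n cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    line = ", ".join(f"{n} cores -> {amdahl_speedup(p, n):.1f}x" for n in (2, 4, 8, 64))
    print(f"p = {p:.2f}: {line}")
```

Even a program that is 90% parallel tops out below a 9x speedup on 64 cores, while graphics workloads sit close to p = 1, which is part of why GPUs can keep soaking up more shaders.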
I'm not willing to answer the question in the title with a "yes". The history of people predicting the end of technological advancement mostly features one wildly wrong prediction after another. But for the first time in the history of computers, it's a question to be taken seriously, and getting any further serious advancement might well require moving away from traditional silicon transistors to some other radical new technology. Of course, the history of computers is one of doing exactly that routinely, from solid-state drives to LCD monitors to optical mice, to name a few relatively recent innovations.