I was very surprised to see your test result; there must be too much overhead in your pool manager controlling the handing out and releasing of connections, and that isn't even synchronization related (since you have not run a concurrency test, the threading behavior is unknown). Still, I think it's PART of a valid test case.
Robbin's initial comment and some of the others here may be a little harsh and off topic. People tend to over-estimate the concurrency requirements of connection pooling and over-engineer the implementation: connections normally come and go in sub-seconds or seconds, yet each pool operation itself rarely takes even a millisecond. The phenomenon Robbin describes is a classic threading issue that applies to many resource-management areas such as caches, queues, or event listeners, but it doesn't really mean much for connection pools in general. As a matter of fact, all middleware products have some kind of funnel-style setting to protect threads from getting thrashed.
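As a rough illustration of what I mean by a funnel-style setting, here is a minimal sketch (names and numbers are my own, not from any particular product): a fair semaphore caps how many threads may try to borrow a connection at once, so the rest fail fast or time out instead of piling up and thrashing.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Hypothetical funnel-style throttle in front of a pool: cap the number
// of threads that may be borrowing at the same time.
class ConnectionFunnel {
    private final Semaphore permits;

    ConnectionFunnel(int maxConcurrent) {
        // Fair mode hands permits out in arrival order.
        this.permits = new Semaphore(maxConcurrent, true);
    }

    // Returns true if the caller may proceed to borrow a connection;
    // false means it timed out waiting at the funnel.
    boolean tryEnter(long timeoutMillis) throws InterruptedException {
        return permits.tryAcquire(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    // Must be called after the borrowed connection is returned.
    void exit() {
        permits.release();
    }
}
```

A caller would wrap its borrow/return in tryEnter/exit; anything beyond the cap waits briefly at the funnel rather than contending inside the pool.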
My connection pool uses a simple java.util.Stack, which is supposed to perform horribly under heavier context switching (there are many good articles; the best is Doug Lea's book), but in the past 7 years of use it has never become the hotspot, because there are never that many concurrent accesses, even with one pool serving over 1000 active users.
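For context, the Stack-backed approach is about this simple (a generic sketch under my own naming, not my actual pool code); the coarse synchronization is exactly the part that theory says should contend, and in practice rarely does when hold times are short:

```java
import java.util.Stack;

// Minimal sketch of a stack-backed resource pool. A synchronized
// java.util.Stack is a single coarse lock, which looks like a
// contention hotspot on paper but seldom is when each borrow/release
// takes well under a millisecond.
class SimplePool<T> {
    private final Stack<T> idle = new Stack<>();

    // Pops an idle resource, or returns null so the caller can create one.
    synchronized T borrow() {
        return idle.isEmpty() ? null : idle.pop();
    }

    // Pushes the resource back for reuse.
    synchronized void release(T resource) {
        idle.push(resource);
    }
}
```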
A multi-threaded test is a necessary step, but if you already have a problem with the pool size expanding even though the thread count does not increase, then that is the more critical thing to fix first.
BTW, keeping over 100 connections in a pool seems quite high; a truly scalable app should not be designed to hold on to resources for too long. I don't know who said that "architecture is key to optimization."