c# - Multi-tenancy: Individual database per tenant


We are developing a multi-tenant application. Regarding architecture, we have designed a shared middle-tier business logic layer and one database per tenant for data persistence. That said, the business tier establishes a set of connections (a connection pool) to the database server for each tenant. This means the application maintains a separate connection pool per tenant. If we expect around 5000 tenants, this design requires very high resource utilization (connections between the app server and the database server for every tenant), which leads to performance issues.
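The reason the pools multiply is that ADO.NET keys each connection pool by the exact connection string, so a per-tenant database name in the string produces a per-tenant pool. A minimal sketch of that keying (server and tenant naming are hypothetical, not from the original question):

```csharp
using System;
using System.Linq;

static class PoolKeyDemo
{
    // Hypothetical connection-string builder: only Database= varies per tenant.
    public static string ForTenant(int id) =>
        $"Server=tcp:app-sql;Database=tenant_{id:D4};Integrated Security=SSPI;";

    static void Main()
    {
        // ADO.NET keys its connection pools by the exact connection string,
        // so every distinct tenant string produces a separate pool.
        int pools = Enumerable.Range(1, 5000).Select(ForTenant).Distinct().Count();
        Console.WriteLine(pools); // 5000 — one pool per tenant
    }
}
```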

We resolved this by keeping a common connection pool. In order to maintain a single connection pool across the different databases, we created a new database called 'app-master'. Now we connect to the 'app-master' database first and then switch the connection to the tenant-specific database. This solved our connection-pool issue.
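Since every request uses the same connection string (pointing at 'app-master'), ADO.NET serves them all from one pool, and the tenant switch happens after the connection is opened. A sketch of that pattern using `SqlConnection.ChangeDatabase` (server and tenant names here are hypothetical):

```csharp
using System.Data.SqlClient;

static class SharedPoolExample
{
    // One pool: every request connects with the SAME connection string
    // (Database=app-master), then switches to the tenant database.
    const string Shared =
        "Server=onprem-sql;Database=app-master;Integrated Security=SSPI;";

    public static void RunForTenant(string tenantDb)
    {
        using (var conn = new SqlConnection(Shared))
        {
            conn.Open();                   // served from the single shared pool
            conn.ChangeDatabase(tenantDb); // issues a USE [tenantDb] under the hood
            // ... run tenant-specific queries here ...
        } // connection returns to the pool; state is reset on the next reuse
    }
}
```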

This solution works fine with an on-premise database server, but it does not work with Azure SQL, which does not support changing the database on an open connection (the USE statement is not supported).

Thanks in advance for any suggestions on how to maintain the connection pool, or for a better approach / best practice for dealing with such a multi-tenant scenario.

I have seen this problem before with multi-tenancy schemes that use separate databases. There are two overlapping problems: the number of web servers per tenant, and the total number of tenants. The first is the bigger issue: if you are caching database connections via ADO.NET connection pooling, the likelihood of a specific customer's request landing on a web server that already has an open connection to their database is inversely proportional to the number of web servers you have. The more you scale out, the more any given customer will notice a per-call (not just initial login) delay while the web server makes the initial connection to the database on their behalf. Each call made to a non-sticky, highly scaled web server tier will be decreasingly likely to find an existing open database connection that can be reused.

The second problem is simply having too many connections in the pool, with the likelihood of creating memory pressure or poor performance.

You can "solve" the first problem by establishing a limited number of database application servers (simple WCF endpoints) that carry out database communications on behalf of the web servers. Each WCF database application server serves a known pool of customer connections (the eastern region goes to server A, the western region goes to server B), which means a high likelihood of a connection-pool hit for any given request. This also allows you to scale access to the database separately from access to the HTML-rendering web servers (and since the database is the critical performance bottleneck, that might not be a bad thing).
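The routing step described above can be sketched as a simple tenant-group-to-endpoint map on the web server; the region names and endpoint addresses below are hypothetical illustrations, not part of the original answer:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical router: pin each customer grouping to one WCF
// database application server so that server's connection pools stay warm.
static class DbRouter
{
    static readonly Dictionary<string, string> EndpointByRegion =
        new Dictionary<string, string>
        {
            ["east"] = "net.tcp://db-app-a/DataService",
            ["west"] = "net.tcp://db-app-b/DataService",
        };

    public static string EndpointFor(string region) =>
        EndpointByRegion.TryGetValue(region, out var ep)
            ? ep
            : throw new ArgumentException($"unknown region: {region}");
}
```

Because every tenant in a region always resolves to the same endpoint, the open connections for that tenant's database accumulate on one server instead of being spread thinly across the whole web tier.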

A second solution is to use content-specific routing via an NLB router. These route traffic based on content and allow you to segment the web server tier by customer grouping (western region, eastern region, etc.); each set of web servers therefore has a smaller number of active connections, with a corresponding increase in the likelihood of finding an open, unused connection.

Both of these problems are caching issues in general: the more you scale out an "unsticky" architecture, the lower the likelihood that a call will hit cached data, whether that is a cached database connection or read-cached data. Managing user connections to maximize the likelihood of a cache hit is useful for maintaining high performance.

