replacing shmem
Excuse my ignorance.
Is there a way to replace the shmem and sem (ipc.c) with files? That way
we could have a sort of parallel server using GFS:
1. Starting a postmaster on ONE machine creates the shared structures.
2. Postmasters then start on the other cluster machines; the various
   machines share the same data files and the same on-file "shared
   memory" structures.
3. Clients connect (for example) round-robin to the clustered Postgres
   nodes.
4. One open problem is the proper creation and destruction of the shared
   structures.
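To illustrate what I mean by on-file shared structures, here is a rough
sketch (the path, the size, and every name are invented, and it ignores
the cross-node locking that the semaphores provide today):

/* Hypothetical sketch: back the shared area with a file on the GFS
 * volume instead of a SysV shmem segment.  The path and size are
 * invented; real use would also need cross-node locking. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHMEM_FILE "/gfs/pgdata/shmem.map"  /* invented path */
#define SHMEM_SIZE (1024 * 1024)

static void *
attach_file_shmem(void)
{
    void *base;
    int   fd = open(SHMEM_FILE, O_RDWR | O_CREAT, 0600);

    if (fd < 0)
    {
        perror("open");
        exit(1);
    }
    if (ftruncate(fd, SHMEM_SIZE) < 0) /* make the file long enough */
    {
        perror("ftruncate");
        exit(1);
    }
    base = mmap(NULL, SHMEM_SIZE, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, 0);
    if (base == MAP_FAILED)
    {
        perror("mmap");
        exit(1);
    }
    close(fd);                  /* the mapping survives the close */
    return base;
}

int
main(void)
{
    char *area = attach_file_shmem();

    strcpy(area, "written by this node");
    /* flush the page to the file so other nodes can eventually see it */
    msync(area, SHMEM_SIZE, MS_SYNC);
    printf("%s\n", area);
    return 0;
}

Every write would then have to be msync()'d out to the GFS volume before
the other nodes could see it, which is presumably where the cost lies.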
thanks,
valter
The Global File System (GFS [now in beta4]) is a shared-disk cluster file
system for Linux. GFS supports journaling and recovery from client
failures. GFS cluster nodes physically share the same storage by means of
Fibre Channel or shared SCSI devices. The file system appears to be local
on each node, and GFS synchronizes file access across the cluster. GFS is
fully symmetric, that is, all nodes are equal and there is no server
which might be a bottleneck or single point of failure. GFS uses read
and write caching while maintaining full UNIX file system semantics.
"Valter Mazzola" <txian@hotmail.com> writes:
> Is there a way to replace the shmem and sem (ipc.c) with files? That
> way we could have a sort of parallel server using GFS.
Unless GFS offers access bandwidth approaching main memory speeds, this
idea is sheer folly :-(
regards, tom lane
I suppose it won't help here to suggest using memory-mapped I/O, because
someone will complain that their platform doesn't support it. I wonder,
though, whether there could be an optional patch to use mmap for all disk
I/O, not just shared memory!
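To illustrate the idea (the path is invented), scanning a file through
mmap instead of read() looks like this:

/* Illustration only: read a file through mmap rather than read().
 * The path is invented. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int
main(void)
{
    const char *path = "/gfs/pgdata/somefile";  /* invented */
    struct stat st;
    char       *data;
    long        nuls = 0;
    off_t       i;
    int         fd = open(path, O_RDONLY);

    if (fd < 0 || fstat(fd, &st) < 0)
    {
        perror(path);
        return 1;
    }
    /* map the whole file; the kernel pages it in on demand */
    data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED)
    {
        perror("mmap");
        return 1;
    }
    /* touch the bytes exactly as if they were an in-memory array */
    for (i = 0; i < st.st_size; i++)
        if (data[i] == '\0')
            nuls++;
    printf("%ld NUL bytes in %ld bytes\n", nuls, (long) st.st_size);
    munmap(data, st.st_size);
    close(fd);
    return 0;
}

The attraction is that the buffer cache and the process address space
become one thing, so there is no extra copy between kernel and user space.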
- Andrew
On Sat, 6 Jan 2001, Valter Mazzola wrote:
> Excuse my ignorance.
> Is there a way to replace the shmem and sem (ipc.c) with files? That
> way we could have a sort of parallel server using GFS.
> ...
Besides the slowness of file I/O, it also doesn't make sense to have one
shared memory area for all nodes. Not all pages in the shared memory area
are dirty pages, and there is no reason for all nodes to share the same
read cache when each could have its own; individual read caches also
scale better.
Either way, an application-level cluster employing two-phase commit is
likely to be a lot more useful. FrontBase (http://www.frontbase.com)
supports this now, and on more platforms than GFS does. Besides, GFS is
rather alpha right now.
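For anyone who hasn't met it: in two-phase commit, every node first
PREPAREs (making the transaction durable and voting yes or no), and the
coordinator broadcasts COMMIT only if every vote is yes. Here is a toy
in-process sketch of the control flow, with simulated nodes (the function
names are invented, not FrontBase's API):

/* Toy sketch of a two-phase-commit coordinator; the "nodes" are
 * simulated in-process and the function names are invented. */
#include <stdio.h>

#define NNODES 3

/* Phase 1: a node makes the transaction durable and votes. */
static int
prepare(int node)
{
    printf("node %d: PREPARE ok\n", node);
    return 1;                   /* a real node could vote no, or time out */
}

/* Phase 2: the coordinator's decision is final for every node. */
static void
decide(int node, int commit)
{
    printf("node %d: %s\n", node, commit ? "COMMIT" : "ABORT");
}

int
main(void)
{
    int n;
    int all_yes = 1;

    /* Phase 1: collect votes; a single "no" aborts everyone */
    for (n = 0; n < NNODES; n++)
        if (!prepare(n))
            all_yes = 0;

    /* Phase 2: broadcast the common outcome */
    for (n = 0; n < NNODES; n++)
        decide(n, all_yes);
    return 0;
}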
Tom