StorPool API Reference

Copyright (c) 2014-2015 StorPool. All rights reserved.

This reference document describes the StorPool API version 15.02 and the supported API calls.

  1. General
  2. Peers
    1. List the network peers
  3. Tasks
    1. List tasks
  4. Services
    1. List all StorPool services
    2. List all blocked StorPool servers
  5. Servers
    1. List all disks on a server
    2. Describe a disk on a server
  6. Clients
    1. Get the current status of all the clients
    2. Wait until a client updates to the current configuration
    3. List all the active requests on a client
  7. AoE Targets
    1. Display AoE status
    2. Export a volume
    3. Export a snapshot
    4. Unexport a volume
    5. Unexport a snapshot
    6. List all active requests on an AoE target
  8. Disks
    1. List all disks
    2. Describe a disk
    3. Get disk info
    4. Eject a disk
    5. Forget a disk
    6. Ignore a disk
    7. Soft-eject a disk
    8. Pause a disk's soft-eject operation
    9. Cancel a disk's soft-eject operation
    10. Set a disk's description
    11. List all the active requests on a disk
  9. Volumes
    1. List all volumes
    2. Get volume and snapshot status
    3. List a single volume
    4. Describe a volume
    5. Get volume info
    6. List the parent snapshots of a volume
    7. Create a new volume
    8. Update a volume
    9. Freeze a volume
    10. Rebase a volume
    11. Abandon disk
    12. Delete a volume
  10. Snapshots
    1. List all snapshots
    2. List snapshot space estimates
    3. List a single snapshot
    4. Describe a snapshot
    5. Get snapshot info
    6. Snapshot a volume
    7. Update a snapshot
    8. Rebase a snapshot
    9. Abandon disk
    10. Delete a snapshot
  11. Attachments
    1. List all attachments
    2. Reassign volumes and/or snapshots
  12. Placement Groups
    1. List all placement groups
    2. Describe a single placement group
    3. Create and/or update a placement group
    4. Delete a placement group
  13. Volume Templates
    1. List all volume templates
    2. List the status of all volume templates
    3. Describe a single volume template
    4. Create a volume template
    5. Update a volume template
    6. Delete a volume template
  14. Volume Relocator
    1. Turn the relocator on
    2. Turn the relocator off
    3. Get the relocator's status
    4. List total per disk relocation estimates
    5. List per disk relocation estimates for a given volume
    6. List per disk relocation estimates for a given snapshot
  15. Balancer
    1. Get the balancer's status
    2. Set the balancer's status
    3. List balancer volume and snapshot status
    4. List total per disk rebalancing estimates
    5. List per disk rebalancing estimates for a given volume
    6. List per disk rebalancing estimates for a given snapshot
    7. Get the disk sets computed by the balancer for a given volume
    8. Get the disk sets computed by the balancer for a given snapshot
    9. List balancer allocation groups
  16. Data Types

General

The StorPool API can be used with any tool that can generate HTTP requests with the GET and POST methods. The only requirement is to supply the Authorization header and, if required by the request, valid JSON data.

For each call there is an explanation of the HTTP request and response and an example in raw format as it should be sent to the StorPool management service.

Here are two examples that use curl with the GET and POST methods respectively, along with their counterparts as issued by the StorPool CLI:

curl -H "Authorization: Storpool v1:1556129910218014736" 192.168.42.208:81/ctrl/1.0/DisksList
storpool disk list

curl -d '{"addDisks":["1"]}' -H "Authorization: Storpool v1:1556129910218014736" 192.168.42.208:81/ctrl/1.0/PlacementGroupUpdate/hdd
storpool placementGroup hdd addDisk 1

Python programs may use the API by importing the Python StorPool bindings (use 'pip install storpool' to install them):

>>> from storpool import spapi
>>> api = spapi.Api('192.168.0.5', 80, '1556560560218011653')
>>> api.peersList()

{
  1: {
       'networks': {
         0: {
          'mac': '00:4A:E6:5F:34:C3'
         }
       }
  },
  2: {
       'networks': {
         0: {
          'mac': '52:54:E6:5F:34:DF'
         }
       }
  },
  3: {
        'networks': {
          0: {
           'mac': '52:57:5F:54:E6:3A'
          }
        }
  }
}

The available calls can be found in the file spapi.py.

Note: Some requests use GET instead of POST and consequently do not require JSON data. Responses, on the other hand, always contain JSON content.
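
The same requests can also be issued from Python without the bindings by constructing the HTTP request by hand. The sketch below mirrors the second curl example above using only the standard library; the address, port and authentication token are the placeholder values from the examples above:

>>> import httplib, json
>>> conn = httplib.HTTPConnection('192.168.42.208', 81, timeout=10)
>>> headers = {'Authorization': 'Storpool v1:1556129910218014736'}
>>> body = json.dumps({'addDisks': ['1']})
>>> conn.request('POST', '/ctrl/1.0/PlacementGroupUpdate/hdd', body, headers)
>>> response = conn.getresponse()
>>> response.status, json.loads(response.read())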

Peers

List the network peers (NetworkPeersList)

List the network nodes running the StorPool beacon, including information such as the ID of each node, the networks it communicates through, and the corresponding MAC addresses.

  1. Request:
  2. Response:

Tasks

List tasks (TasksList)

List the currently active recovery tasks. This call will return JSON data only when there is a relocation in progress. Under normal operation of the cluster it will return no data.

  1. Request:
  2. Response:

Services

List all StorPool services (ServicesList)

List all the services in the cluster (StorPool servers, clients, management, etc.). If the whole cluster is not operational, this call will return an error.

  1. Request:
  2. Response:

List all blocked StorPool servers (ServersListBlocked)

List the currently active StorPool servers even before the cluster has become operational, along with information about any missing disks that the cluster is waiting for.

  1. Request:
  2. Response:

Servers

List all disks on a server (ServerDisksList)

Return detailed information about each disk on the given server.

  1. Request:
  2. Response:

Describe a disk on a server (ServerDiskDescribe)

Return detailed information about a disk on the given server and the objects on it.

  1. Request:
  2. Response:
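
With the Python bindings from the General section, the two calls above map to the following methods (an illustrative sketch; the server and disk IDs are hypothetical):

>>> api.serverDisksList(2)          # all disks on server 2
>>> api.serverDiskDescribe(2, 201)  # disk 201 on server 2, including the objects on it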

Clients

Get the current status of all the clients (ClientsConfigDump)

Return the status of each client including its current generation and generation update status.

  1. Request:
  2. Response:

Wait until a client updates to the current configuration (ClientConfigWait)

Return the same JSON as ClientsConfigDump but block until the client has updated its configuration information to the current generation at the time of the request.

  1. Request:
  2. Response:
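
A typical use of this call is to dump the status of all clients and then block on every client that has not yet caught up with the current cluster generation. A minimal sketch with the Python bindings (field names follow the response format described above):

>>> for client in api.clientsConfigDump():
...     if client.configStatus != 'ok':
...         api.clientConfigWait(client.id)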

List all the active requests on a client (ClientActiveRequests)

List detailed information about the requests being currently processed on the given client.

  1. Request:
  2. Response:

AoE Targets

Display AoE status (AoeStatus)

List the StorPool volumes and snapshots exported over AoE.

  1. Request:
  2. Response:

Export a volume (AoeExportVolume)

Export the specified volume over AoE.

  1. Request:
  2. Response:

Export a snapshot (AoeExportSnapshot)

Export the specified snapshot over AoE.

  1. Request:
  2. Response:

Unexport a volume (AoeUnexportVolume)

Stop exporting the specified volume over AoE.

  1. Request:
  2. Response:

Unexport a snapshot (AoeUnexportSnapshot)

Stop exporting the specified snapshot over AoE.

  1. Request:
  2. Response:
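
The export and unexport calls take only the name of the volume or snapshot. A minimal round trip with the Python bindings might look as follows ('testvolume' is a hypothetical volume name):

>>> api.aoeExportVolume('testvolume')    # make the volume accessible over AoE
>>> api.aoeStatus()                      # verify that it is listed among the exports
>>> api.aoeUnexportVolume('testvolume')  # remove the export again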

List all active requests on an AoE target (AoeTargetActiveRequests)

List detailed information about the requests being currently processed on the given AoE target.

  1. Request:
  2. Response:

Disks

List all disks (DisksList)

  1. Request:
  2. Response:

Describe a disk (DiskDescribe)

Return detailed information about the given disk, including the objects stored on it.

  1. Request:
  2. Response:

Get disk info (DiskGetInfo)

Return information about the given disk, including the volumes that have data stored on it.

  1. Request:
  2. Response:

Eject a disk (DiskEject)

Stop operations on the given disk even if it is not empty.

  1. Request:
  2. Response:

Forget a disk (DiskForget)

Remove the disk from any placement groups or volumes that it is used in.

  1. Request:
  2. Response:

Ignore a disk (DiskIgnore)

Try to boot the cluster by ignoring this disk.

  1. Request:
  2. Response:

Soft-eject a disk (DiskSoftEject)

Stop writes to the given disk and start relocating all the data stored on it to other disks.

  1. Request:
  2. Response:
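
A soft-eject only starts the relocation; the progress can be followed through the tasks list, and once the data has been moved away the disk can be ejected. A rough sketch with the Python bindings (the disk ID is hypothetical and the polling is deliberately coarse, waiting for all relocation tasks in the cluster to finish):

>>> import time
>>> api.diskSoftEject(1101)   # stop writes and start draining the disk
>>> while api.tasksList():    # relocation tasks are listed while data is being moved
...     time.sleep(10)
>>> api.diskEject(1101)       # stop operations on the now empty disk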

Pause a disk's soft-eject operation (DiskSoftEjectPause)

Temporarily pause the relocation tasks for the disk. This can be helpful in heavy load situations.

  1. Request:
  2. Response:

Cancel a disk's soft-eject operation (DiskSoftEjectCancel)

Stop the relocation tasks for the disk and mark it as usable again. After this operation data will be moved back to the disk.

  1. Request:
  2. Response:

Set a disk's description (DiskSetDescription)

  1. Request:
  2. Response:

List all the active requests on a disk (DiskActiveRequests)

List detailed information about the requests being currently processed on the given disk.

  1. Request:
  2. Response:

Volumes

List all volumes (VolumesList)

Return configuration information about all the volumes.

  1. Request:
  2. Response:

Get volume and snapshot status (VolumesGetStatus)

Return the status of each volume and snapshot.

  1. Request:
  2. Response:

List a single volume (Volume)

Same as VolumesList but only return information about the given volume.

  1. Request:
  2. Response:

Describe a volume (VolumeDescribe)

Return detailed information about the distribution of the volume's data on the disks.

  1. Request:
  2. Response:
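
With the Python bindings, the listing and describe calls look like this (a sketch; 'testvolume' is a hypothetical volume name):

>>> api.volumesList()                 # configuration of every volume
>>> api.volumesStatus()               # status of every volume and snapshot
>>> api.volumeDescribe('testvolume')  # placement of the volume's objects on the disks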

Get volume info (VolumeGetInfo)

Return general information about the distribution of the volume's data on the disks.

  1. Request:
  2. Response:

List the parent snapshots of a volume (VolumeListSnapshots)

List a volume's parent snapshots in the same format as VolumesList.

  1. Request:
  2. Response:

Create a new volume (VolumeCreate)

  1. Request:
  2. Response:

Update a volume (VolumeUpdate)

Alter the configuration of an existing volume.

  1. Request:
  2. Response:

Freeze a volume (VolumeFreeze)

Convert the volume into a snapshot.

  1. Request:
  2. Response:

Rebase a volume (VolumeRebase)

Change the parent of the volume by choosing from the ones higher in the hierarchy or by rebasing it to no parent.

  1. Request:
  2. Response:

Abandon disk (VolumeAbandonDisk)

  1. Request:
  2. Response:

Delete a volume (VolumeDelete)

  1. Request:
  2. Response:

Snapshots

In essence, snapshots are very similar to volumes: most operations supported by volumes are also supported by snapshots (all except the write-related ones). Snapshots cannot be modified and play an essential role in copy-on-write scenarios.

List all snapshots (SnapshotsList)

List all the snapshots in the cluster in the same format as VolumesList.

  1. Request:
  2. Response:

List snapshot space estimates (SnapshotsSpace)

List estimated virtual space used by each snapshot.

  1. Request:
  2. Response:

List a single snapshot (Snapshot)

Same as SnapshotsList but only return information about the given snapshot.

  1. Request:
  2. Response:

Describe a snapshot (SnapshotDescribe)

Return detailed information about the distribution of the snapshot's data on the disks.

  1. Request:
  2. Response:

Get snapshot info (SnapshotGetInfo)

Return general information about the distribution of the snapshot's data on the disks.

  1. Request:
  2. Response:

Snapshot a volume (VolumeSnapshot)

Create a snapshot of the given volume; the snapshot becomes the parent of the volume.

  1. Request:
  2. Response:

Update a snapshot (SnapshotUpdate)

Alter the configuration of an existing snapshot.

  1. Request:
  2. Response:

Rebase a snapshot (SnapshotRebase)

Change the parent of the snapshot by choosing from the ones higher in the hierarchy or by rebasing it to no parent.

  1. Request:
  2. Response:

Abandon disk (VolumeAbandonDisk)

  1. Request:
  2. Response:

Delete a snapshot (SnapshotDelete)

  1. Request:
  2. Response:

Attachments

List all attachments (AttachmentsList)

List the volumes and snapshots currently attached to clients along with the read/write rights of each attachment.

  1. Request:
  2. Response:

Reassign volumes and/or snapshots (VolumesReassign)

Perform bulk attach/detach and attachment rights modification.

  1. Request:
  2. Response:

Placement Groups

Placement groups provide a way to specify the disks on which a volume's data should be stored.

List all placement groups (PlacementGroupsList)

  1. Request:
  2. Response:

Describe a single placement group (PlacementGroupDescribe)

Same as PlacementGroupsList but only return information about a given group.

  1. Request:
  2. Response:

Create and/or update a placement group (PlacementGroupUpdate)

If a group by the specified name does not exist, it will be created.

  1. Request:
  2. Response:

Delete a placement group (PlacementGroupDelete)

  1. Request:
  2. Response:

Volume Templates

A template is a set of rules used for creating many similar volumes.

List all volume templates (VolumeTemplatesList)

  1. Request:
  2. Response:

List the status of all volume templates (VolumeTemplatesStatus)

  1. Request:
  2. Response:

Describe a single volume template (VolumeTemplateDescribe)

Same as VolumeTemplatesList but only return information about a given template.

  1. Request:
  2. Response:

Create a volume template (VolumeTemplateCreate)

  1. Request:
  2. Response:

Update a volume template (VolumeTemplateUpdate)

Alter the configuration of an existing volume template.

  1. Request:
  2. Response:

Delete a volume template (VolumeTemplateDelete)

  1. Request:
  2. Response:

Volume Relocator

This is a service that moves data when needed, e.g. when removing or adding disks.
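
With the Python bindings, the relocator calls described below reduce to (a sketch):

>>> api.volumeRelocatorStatus()   # current relocator state (see RelocatorStatus in Data Types)
>>> api.volumeRelocatorOff()      # temporarily stop data movement
>>> api.volumeRelocatorOn()       # resume data movement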

Turn the relocator on (VolumeRelocatorOn)

  1. Request:
  2. Response:

Turn the relocator off (VolumeRelocatorOff)

  1. Request:
  2. Response:

Get the relocator's status (VolumeRelocatorStatus)

  1. Request:
  2. Response:

List total per disk relocation estimates (VolumeRelocatorDisksList)

  1. Request:
  2. Response:

List per disk relocation estimates for a given volume (VolumeRelocatorVolumeDisks)

  1. Request:
  2. Response:

List per disk relocation estimates for a given snapshot (VolumeRelocatorSnapshotDisks)

  1. Request:
  2. Response:

Balancer

This is a service that decides when it is a good time to move data.
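
With the Python bindings, the balancer state can be inspected as sketched below (method names as defined in spapi.py; setting the balancer's status additionally requires a JSON body with a BalancerCommand value, which is not shown here):

>>> api.volumeBalancerGetStatus()      # current balancer state
>>> api.volumeBalancerVolumesStatus()  # volumes and snapshots the current run would move
>>> api.volumeBalancerDisks()          # per-disk rebalancing estimates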

Get the balancer's status (VolumeBalancerStatus)

  1. Request:
  2. Response:

Set the balancer's status (VolumeBalancerStatus)

  1. Request:
  2. Response:

List balancer volume and snapshot status (VolumeBalancerVolumesStatus)

Show which volumes and snapshots will be reallocated by the current balancer run.

  1. Request:
  2. Response:

List total per disk rebalancing estimates (VolumeBalancerDisksList)

  1. Request:
  2. Response:

List per disk rebalancing estimates for a given volume (VolumeBalancerVolumeDisks)

  1. Request:
  2. Response:

List per disk rebalancing estimates for a given snapshot (VolumeBalancerSnapshotDisks)

  1. Request:
  2. Response:

Get the disk sets computed by the balancer for a given volume (VolumeBalancerVolumeDiskSets)

  1. Request:
  2. Response:

Get the disk sets computed by the balancer for a given snapshot (VolumeBalancerSnapshotDiskSets)

  1. Request:
  2. Response:

List balancer allocation groups (VolumeBalancerGroups)

  1. Request:
  2. Response:

Data Types

"":The constant value "".
"-":The constant value "-".
"all":The constant value "all".
-1:The constant value -1.
0:The constant value 0.
AoeExportStatus:One of {"OK", "down"}
AoeTargetID:integer, 1 <= value <= 4095
AttachmentPos:integer, 0 <= value <= 1023
AttachmentRights:One of {"rw", "ro"}
BalancerCommand:One of {"start", "stop", "commit", "auto"}
BalancerStatus:One of {"nothing to do", "blocked", "waiting", "working", "ready", "commiting"}
Bandwidth:a positive integer or '-' for unlimited
ClientID:integer, 1 <= value <= 24575
ClientStatus:One of {"running", "down"}
ClusterStatus:One of {"running", "waiting", "down"}
DiskDescritpion:string, regex ^[A-Za-z0-9_\- ]{,30}$
DiskID:integer, 0 <= value <= 4095
DiskSoftEjectStatus:One of {"on", "off", "paused"}
IOPS:a positive integer or '-' for unlimited
MAC Address:string, regex ^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$
MgmtID:integer, 1 <= value <= 4095
NetID:integer, 0 <= value <= 3
NodeID:integer, 0 <= value <= 63
ObjectState:ObjectState, enumeration from 0 to 9
PeerID:integer, 0 <= value <= 65535
PeerStatus:One of {"up", "down"}
PlacementGroupName:a string(128), matching ^[A-Za-z0-9_\-]+$, except {list}
RelocatorStatus:One of {"on", "off", "blocked"}
Replication:integer, 1 <= value <= 3
RequestOp:One of {"read", "write", "merge", "system", "entries flush", "#bad_state", "#bad_drOp"}
ServerID:integer, 1 <= value <= 32767
ServerStatus:One of {"running", "waiting", "booting", "down"}
Size:a positive integer divisible by 512
SizeAdd:a positive integer divisible by 512
SnapshotName:a string(200), matching ^\*?[A-Za-z0-9_\-.:@]+$, except {list, status}
VolumeCurentStatus:One of {"up", "up soon", "data lost", "down"}
VolumeName:a string(200), matching ^\#?[A-Za-z0-9_\-.:]+$, except {list, status}
VolumeTemplateName:a string(200), matching ^[A-Za-z0-9_\-]+$, except {list}
bool:true or false.
bool, default=false:A value of type bool. Default value = False.
client status:One of {"ok", "updating", "down"}
int:An integer value.
long:A long integer value.
null:The constant value null.
string:A string value.
true:The constant value true.
PKR¼FzÅs%%storpool/spdoc.py# #- # Copyright (c) 2014, 2015 StorPool. # All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # class Html(object): escapeTable = { "&": "&", '"': """, "'": "'", ">": ">", "<": "<", } def __init__(self): self.buf = "" def add(self, fmt, *args, **kwargs): args = map(self.escape, args) kwargs = dict((k, self.escape(v)) for k, v in kwargs.iteritems()) self.buf += fmt.format(*args, **kwargs) return self def back(self, count): self.buf = self.buf[:-count] return self def escape(self, text): return "".join(self.escapeTable.get(c,c) for c in text) def __str__(self): return self.buf class Doc(object): """ Base documentation entity class """ def __init__(self, name, desc): self.name = name.strip() self.desc = desc.strip() class TypeDoc(Doc): types = {} @classmethod def buildTypes(cls, html): html.add('

Data Types

\n') html.add('\n') for name, doc in sorted(TypeDoc.types.iteritems()): html.add('\n', n=name, d=doc.desc) html.add('
{n}:{d}
\n') def __init__(self, name, desc, deps=[]): super(TypeDoc, self).__init__(name, desc) self.deps = deps if not self.deps and self.name not in TypeDoc.types: TypeDoc.types[self.name] = self def attrList(self, html): if self.deps: assert len(self.deps) == 1 dep = self.deps[0] html.add('{name} {dep}', name=self.name, dep=dep.name) else: html.add('{name}', name=self.name) def toJson(self, html, pad): if self.deps: assert len(self.deps) == 1 dep = self.deps[0] dep.toJson(html, pad) html.add(' /* {0} */', self.name) else: html.add('{0}', self.name) class EitherDoc(TypeDoc): def attrList(self, html): html.add("{0}\n", self.desc) html.add('\n') def toJson(self, html, pad): html.add('Either(') for st in self.deps: st.toJson(html, pad) html.add(', ') html.back(2).add(')') class ListDoc(TypeDoc): def attrList(self, html): valT, = self.deps #html.add('{0}\n', self.desc) html.add('\n') def toJson(self, html, pad): valT, = self.deps html.add('[') valT.toJson(html, pad) html.add(', ...]') class DictDoc(TypeDoc): def attrList(self, html): keySt, valSt = self.deps html.add("{0}\n", self.desc) html.add('\n') def toJson(self, html, pad): keySt, valSt = self.deps html.add('{{\n') html.add('{pad}"', pad=' ' * (pad + 2)) keySt.toJson(html, pad + 2) html.add('": ') valSt.toJson(html, pad + 2) html.add(', ...\n') html.add('{pad}}}', pad=' ' * (pad)) class JsonObjectDoc(Doc): def __init__(self, name, desc, attrs): super(JsonObjectDoc, self).__init__(name, desc) self.attrs = attrs def attrList(self, html): html.add('{name}', name=self.name) html.add('\n') def toJson(self, html, pad): html.add('{{\n') for attrName, (attrType, attrDesc) in sorted(self.attrs.iteritems()): html.add('{pad}"{attr}": ', pad=' ' * (pad + 2), attr=attrName) attrType.toJson(html, pad + 2) html.add(',\n') html.back(2).add('\n{pad}}}', pad=' ' * pad) class ApiCallDoc(Doc): def __init__(self, name, desc, method, path, args, json, returns): if not name: name = "XXX Missing title." super(ApiCallDoc, self).__init__(name, desc) self.method = method self.path = path self.args = args self.json = json self.returns = returns self.query = path.split("/")[3] def index(self, html): html.add('
  • {name}
  • \n', name=self.name, query=self.query) def build(self, html): html.add('

    {name} ({query})

    \n', name=self.name, query=self.query) if self.desc: html.add("

    {0}

    \n", self.desc) html.add('
      ') html.add('
    1. Request:\n', query=self.path.split("/")[3]) html.add('\n') html.add('
    2. \n') html.add('
    3. Response:\n') html.add('\n') html.add('
    4. \n') html.add('
    \n') class DocSection(Doc): """ Description for API and API sections""" def buildDesc(self, html): currentParagraph = [] isCode = False preSpaces = 0 for line in self.desc.split('\n'): #print line.strip() if isCode: if line and line.strip() == '```': html.add('\n') isCode = False else: html.add('{0}\n', line[preSpaces:]) else: if line and line.strip() == '```': html.add('

    {0}

    \n', "\n".join(currentParagraph)) currentParagraph = [] html.add('
    ')
    					preSpaces = len(line) - 3
    					isCode = True
    				elif line.strip():
    					currentParagraph.append(line.strip())
    				else:
    					html.add('

    {0}

    \n', "\n".join(currentParagraph)) currentParagraph = [] if( len(currentParagraph) > 0 ): html.add('

    {0}

    \n', "\n".join(currentParagraph)) currentParagraph = [] class ApiSectionDoc(DocSection): """ Doc. section for related API calls """ def __init__(self, name, desc): super(ApiSectionDoc, self).__init__(name, desc) self.id = name.replace(' ', '-') self.calls = [] def index(self, html): html.add('
  • {1}
  • \n', self.id, self.name) html.add('
      \n') for call in self.calls: call.index(html) html.add('
    \n') def build(self, html): html.add('

    {1}

    \n', self.id, self.name) self.buildDesc(html) for call in self.calls: call.build(html) class ApiDoc(DocSection): """ API documentation holder """ def __init__(self, title, desc): super(ApiDoc, self).__init__(title, desc) self.sections = [] self.currentSection = None def addSection(self, name, desc): self.currentSection = ApiSectionDoc(name, desc) self.sections.append(self.currentSection) def addCall(self, call): self.currentSection.calls.append(call) def build(self, html): html.add("

    {0}

    \n", self.name) self.buildDesc(html) html.add('
      \n') for sect in self.sections: sect.index(html) html.add('
    1. Data Types
    2. \n') html.add('
    \n') for sect in self.sections: sect.build(html) TypeDoc.buildTypes(html) if __name__ == '__main__': from spapi import Api html = Html() Api.spDoc.build(html) with open('ApiDoc.html.template') as tmpl: for line in tmpl.read().split('\n'): if line == '__DOC__': print html else: print line PK°‰ýFÓ¼ÍË@\@\storpool/spapi.py# #- # Copyright (c) 2014, 2015 StorPool. # All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import httplib as http import json as js import spjson as js import sptypes as sp from sputils import msec, sec, pathPollWait, spType, either, const from spdoc import ApiDoc, ApiCallDoc SP_DEV_PATH = '/dev/storpool/' SP_API_PREFIX = '/ctrl/1.0' class _API_ARG(object): def __init__(self, name, validate): self._name = name self._type = spType(validate) DiskId = _API_ARG('diskId', sp.DiskId) ServerId = _API_ARG('serverId', sp.ServerId) ClientId = _API_ARG('clientId', sp.ClientId) AoeTargetId = _API_ARG('aoeTargetId', sp.AoeTargetId) VolumeName = _API_ARG('volumeName', sp.VolumeName) SnapshotName = _API_ARG('snapshotName', sp.SnapshotName) PlacementGroupName = _API_ARG('placementGroupName', sp.PlacementGroupName) VolumeTemplateName = _API_ARG('templateName', sp.VolumeTemplateName) class _API_METHOD(object): def __init__(self, method, query, args, json, returns): self.method = method self.path = "{pref}/{query}".format(pref=SP_API_PREFIX, query=query) self.args = args self.json = spType(json) if json is not None else None self.returns = spType(returns) self.types = {} def addType(self, name, desc): self.types.update({name: desc}) def doc(self, name, desc): self.spDoc = ApiCallDoc(name, desc, self.method, self.path, dict((arg._name, arg._type.spDoc) for arg in self.args), self.json.spDoc if self.json else None, self.returns.spDoc) return self def compile(self): method, path, args, json, returns = self.method, self.path, self.args, self.json, self.returns args = list(args) if json is not None: args.append(_API_ARG('json', json)) commas = lambda xs: ", ".join(xs) fmtEq = lambda x: "{x}={x}".format(x=x) ftext = 'def func(self, {args}):\n'.format(args=commas(arg._name for arg in args)) for arg in args: ftext += ' {arg} = _validate_{arg}({arg})\n'.format(arg=arg._name) ftext += ' path = "{path}"'.format(path=path) if args: ftext += '.format({args})\n'.format(args=commas(fmtEq(arg._name) for arg in args)) ftext += '\n' # ftext += ' print "Query: {0}".format(path)\n' ftext += ' res = self("{method}", path, {json})\n'.format(method=method, json=None if json is None else 'json') ftext += ' return returns(res)' # print ftext globalz = dict(("_validate_{0}".format(arg._name), arg._type.handleVal) for arg in args) globalz['returns'] = returns.handleVal exec ftext in globalz func = globalz['func'] del globalz['func'] doc = "HTTP: {method} {path}\n\n".format(method=method, path=path) if args: doc += " Arguments:\n" for arg in args: doc += " {argName}: {argType}\n".format(argName=arg._name, argType=arg._type.name) doc += "\n" if returns is not None: doc += " Returns: 
{res}\n".format(res=returns.name) # print doc func.__doc__ = doc func.spDoc = self.spDoc return func def GET(query, *args, **kwargs): assert 'json' not in kwargs, 'GET requests currently do not accept JSON objects' assert 'returns' in kwargs, 'GET requests must specify a return type' return _API_METHOD('GET', query, args, None, kwargs['returns']) def POST(query, *args, **kwargs): return _API_METHOD('POST', query, args, kwargs.get('json', None), kwargs.get('returns', ApiOk)) @js.JsonObject(ok=const(True), generation=long) class ApiOk(object): ''' ok: Always returns true. If something goes wrong, an ApiError is returned instead. generation: The cluster generation based on the number of configuration changes since the cluster was created. ''' @js.JsonObject(autoName=sp.maybe(sp.SnapshotName)) class ApiOkVolumeCreate(ApiOk): ''' autoName: The name of the transient snapshot used during the creation of the volume. ''' class ApiError(Exception): def __init__(self, status, json): super(ApiError, self).__init__() self.status = status self.json = json self.name = json['error'].get('name', "") self.desc = json['error'].get('descr', "") def __str__(self): return "{0}: {1}".format(self.name, self.desc) class ApiMeta(type): def spDocSection(cls, name, desc): cls.spDoc.addSection(name, desc) def __setattr__(cls, name, func): cls.spDoc.addCall(func.spDoc) func = func.compile() func.__name__ = func.func_name = name func.__module__ = __name__ type.__setattr__(cls, name, func) class Api(object): '''StorPool API abstraction''' __metaclass__ = ApiMeta spDoc = ApiDoc( """StorPool API Reference""", """ Copyright (c) 2014-2015 StorPool. All rights reserved. This reference document describes the StorPool API version 15.02 and the supported API calls. """ ) def __init__(self, host='127.0.0.1', port=80, auth='', timeout=10): # print host, port, auth self._host = host self._port = port self._timeout = timeout self._authHeader = {"Authorization": "Storpool v1:" + str(auth)} def __call__(self, method, path, json=None): if json is not None: json = js.dumps(json) try: conn = http.HTTPConnection(self._host, self._port, self._timeout) request = conn.request(method, path, json, self._authHeader) response = conn.getresponse() status, json = response.status, js.load(response) if status != http.OK or 'error' in json: # print status, json raise ApiError(status, json) else: # print json return json['data'] finally: conn.close() def volumeDevLinkWait(self, volumeName, attach, pollTime=200*msec, maxTime=60*sec): return pathPollWait(SP_DEV_PATH + volumeName, attach, True, pollTime, maxTime) Api.spDocSection("General", """ The StorPool API can be used with any tool that can generate HTTP requests with the GET and POST methods. The only requirement is to supply the Authorization header and, if required by the request, valid JSON data. For each call there is an explanation of the HTTP request and response and an example in raw format as it should be sent to the StorPool management service. 
Here are two examples using curl using the GET and POST methods respectively and their counterparts as issued by the StorPool CLI: ``` curl -H "Authorization: Storpool v1:1556129910218014736" 192.168.42.208:81/ctrl/1.0/DisksList storpool disk list ``` ``` curl -d '{"addDisks":["1"]}' -H "Authorization: Storpool v1:1556129910218014736" 192.168.42.208:81/ctrl/1.0/PlacementGroupUpdate/hdd storpool placementGroup hdd addDisk 1 ``` Python programs may use the API by importing the Python StorPool bindings (use 'pypi install storpool' to install them): ``` >>>import spapi >>>api=spapi.Api('192.168.0.5', 80, '1556560560218011653') >>>a.peersList() { 1: { 'networks': { 0: { 'mac': '00:4A:E6:5F:34:C3' } } }, 2: { 'networks': { 0: { 'mac': '52:54:E6:5F:34:DF' } } }, 3: { 'networks': { 0: { 'mac': '52:57:5F:54:E6:3A' } } } } ``` The calls that may be used may be found in the file spapi.py Note: Requests will sometimes use GET instead of POST and consequently, will not require JSON. Responses on the other hand always produce JSON content. """ ) Api.spDocSection("Peers", """ """) Api.peersList = GET('NetworkPeersList', returns={sp.PeerId: sp.PeerDesc}).doc("List the network peers", """ List the network nodes running the StorPool beacon including information such as the ID of the node, the networks it communicates through and the corresponding MAC addresses. """ ) Api.spDocSection("Tasks", """ """) Api.tasksList = GET('TasksList', returns=[sp.Task]).doc("List tasks", """ List the currently active recovery tasks. This call will return JSON data only when there is a relocation in progress. Under normal operation of the cluster it will return no data. """ ) Api.spDocSection("Services", """ """) Api.servicesList = GET('ServicesList', returns=sp.ClusterStatus).doc("List all StorPool services", """ List all the services in the cluster (StorPool servers, clients, management, etc). If the whole cluster is not operational this call will return an error. """ ) Api.serversListBlocked = GET('ServersListBlocked', returns=sp.ClusterStatus).doc("List all blocked StorPool servers", """ List the currently active StorPool servers even before the cluster has become operational, along with information about any missing disks that the cluster is waiting for. """ ) Api.spDocSection("Servers", """ """) #Api.serversList = GET('ServersList', returns=sp.ClusterStatus).doc("List all Storpool servers", # """ # Returns the the same output as ServicesList but ommits clients. Returns # an error if the whole cluster is not operational. # """ # ) Api.serverDisksList = GET('ServerDisksList/{serverId}', ServerId, returns={sp.DiskId: sp.DiskSummary}).doc("List all disks on a server", """ Return detailed information about each disk on the given server. """ ) Api.serverDiskDescribe = GET('ServerDiskDescribe/{serverId}/{diskId}', ServerId, DiskId, returns=sp.Disk).doc("Describe a disk on a server", """ Return detailed information about a disk on the given server and the objects on it. """ ) Api.spDocSection("Clients", """ """) Api.clientsConfigDump = GET('ClientsConfigDump', returns=[sp.ClientConfigStatus]).doc("Get the current status of all the clients", """ Return the status of each client including its current generation and generation update status. 
""" ) Api.clientConfigWait = GET('ClientConfigWait/{clientId}', ClientId, returns=[sp.ClientConfigStatus]).doc("Wait until a client updates to the current configuration", """ Return the same JSON as ClientsConfigDump but block until the client has updated its configuration information to the current generation at the time of the request. """ ) Api.clientActiveRequests = GET('ClientActiveRequests/{clientId}', ClientId, returns=sp.ClientActiveRequests).doc("List all the active requests on a client", """ List detailed information about the requests being currently processed on the given client. """ ) Api.spDocSection("AoE Targets", """ """) Api.aoeStatus = GET('AoeStatus', returns=[sp.AoeExport]).doc("Display AoE status", """ List the StorPool volumes and snapshots exported over AoE. """ ) Api.aoeExportVolume = POST('AoeExportVolume/{volumeName}', VolumeName).doc("Export a volume", """ Export the specified volume over AoE. """ ) Api.aoeExportSnapshot = POST('AoeExportSnapshot/{snapshotName}', SnapshotName).doc("Unexport a volume", """ Export the specified snapshot over AoE. """ ) Api.aoeUnexportVolume = POST('AoeUnexportVolume/{volumeName}', VolumeName).doc("Export a snapshot", """ Stop exporting the specified volume over AoE. """ ) Api.aoeUnexportSnapshot = POST('AoeUnexportSnapshot/{snapshotName}', SnapshotName).doc("Unexport a snaphot", """ Stop exporting the specified snapshot over AoE. """ ) Api.aoeTargetActiveRequests = GET('AoeTargetActiveRequests/{aoeTargetId}', AoeTargetId, returns=sp.AoeTargetActiveRequests).doc("List all active requests on an AoE target", """ List detailed information about the requests being currently processed on the given AoE target """ ) Api.spDocSection("Disks", """ """) Api.disksList = GET('DisksList', returns={sp.DiskId: sp.DiskSummary}).doc("List all disks", """ """) Api.diskDescribe = GET('DiskDescribe/{diskId}', DiskId, returns=sp.Disk).doc("Describe a disk", """ List all disks including detailed information about the objects on each disk. """ ) Api.diskInfo = GET('DiskGetInfo/{diskId}', DiskId, returns=sp.DiskInfo).doc("Get disk info", """ List all disks including information about the volumes stored on each disk. """ ) Api.diskEject = POST('DiskEject/{diskId}', DiskId).doc("Eject a disk", """ Stop operations on the given disk even if it is not empty. """ ) Api.diskForget = POST('DiskForget/{diskId}', DiskId).doc("Forget a disk", """ Remove the disk from any placement groups or volumes that it is used in. """ ) Api.diskIgnore = POST('DiskIgnore/{diskId}', DiskId).doc("Ignore a disk", """ Try to boot the cluster by ignoring this disk. """ ) Api.diskSoftEject = POST('DiskSoftEject/{diskId}', DiskId).doc("Soft-eject a disk", """ Stop writes to the given disk and start relocating all the data stored on it to other disks. """ ) Api.diskSoftEjectPause = POST('DiskSoftEjectPause/{diskId}', DiskId).doc("Pause a disk's soft-eject operation", """ Temporarily pause the relocation tasks for the disk. This can be helpful in heavy load situations. """ ) Api.diskSoftEjectCancel = POST('DiskSoftEjectCancel/{diskId}', DiskId).doc("Cancel a disk's soft-eject operation", """ Stop the relocation tasks for the disk and mark it as usable again. After this operation data will be moved back to the disk. 
""" ) Api.diskSetDesc = POST('DiskSetDescription/{diskId}', DiskId, json=sp.DiskDescUpdate).doc("Set a disk's description", """ """) Api.diskActiveRequests = GET('DiskActiveRequests/{diskId}', DiskId, returns=sp.DiskActiveRequests).doc("List all the active requests on a disk", """ List detailed information about the requests being currently processed on the given disk. """ ) Api.spDocSection("Volumes", """ """) Api.volumesList = GET('VolumesList', returns=[sp.VolumeSummary]).doc("List all volumes", """ Return configuration information about all the volumes. """ ) Api.volumesStatus = GET('VolumesGetStatus', returns={either(sp.VolumeName, sp.SnapshotName): sp.VolumeStatus}).doc("Get volume and snapshot status", """ Return the status of each volume and snapshot. """ ) Api.volumeList = GET('Volume/{volumeName}', VolumeName, returns=[sp.VolumeSummary]).doc("List a single volume", """ Same as VolumeList but only return information about a given volume. """ ) Api.volumeDescribe = GET('VolumeDescribe/{volumeName}', VolumeName, returns=sp.Volume).doc("Describe a volume", """ Return detailed information about the distribution of the volume's data on the disks. """ ) Api.volumeInfo = GET('VolumeGetInfo/{volumeName}', VolumeName, returns=sp.VolumeInfo).doc("Get volume info", """ Return general information about the distribution of the volume's data on the disks. """ ) Api.volumeListSnapshots = GET('VolumeListSnapshots/{volumeName}', VolumeName, returns=[sp.SnapshotSummary]).doc("List the parent snapshots of a volume", """ List a volume's parent snapshots in the same format as VolumeList """ ) Api.volumeCreate = POST('VolumeCreate', json=sp.VolumeCreateDesc, returns=ApiOkVolumeCreate).doc("Create a new volume", """ """) Api.volumeUpdate = POST('VolumeUpdate/{volumeName}', VolumeName, json=sp.VolumeUpdateDesc).doc("Update a volume", """ Alter the configuration of an existing volume. """ ) Api.volumeFreeze = POST('VolumeFreeze/{volumeName}', VolumeName).doc("Freeze a volume", """ Convert the volume to a snapshot """ ) Api.volumeRebase = POST('VolumeRebase/{volumeName}', VolumeName, json=sp.VolumeRebaseDesc).doc("Rebase a volume", """ Change the parent of the volume by choosing from the ones higher in the hierarchy or by rebasing it to no parent. """ ) Api.volumeAbandonDisk = POST('VolumeAbandonDisk/{volumeName}', VolumeName, json=sp.AbandonDiskDesc).doc("Abandon disk", """ """ ) Api.volumeDelete = POST('VolumeDelete/{volumeName}', VolumeName).doc("Delete a volume", """ """) Api.spDocSection("Snapshots", """ Snapshots in their essence are very similar to volumes in the sense that many operations supported by volumes are also supported by snapshots (all except write-related operations). They can not be modified and play an essential role in copy-on-write scenarios. """ ) Api.snapshotsList = GET('SnapshotsList', returns=[sp.SnapshotSummary]).doc("List all snapshots", """ List all the snapshots in the cluster in the same format as VolumeList. """ ) Api.snapshotsSpace = GET('SnapshotsSpace', returns=[sp.SnapshotSpace]).doc("List snapshots space estimations", """ List estimated virtual space used by each snapshot. """ ) Api.snapshotList = GET('Snapshot/{snapshotName}', SnapshotName, returns=[sp.SnapshotSummary]).doc("List a single snapshot", """ Same as SnapshotList but only return information about a given snapshot. 
""" ) Api.snapshotDescribe = GET('SnapshotDescribe/{snapshotName}', SnapshotName, returns=sp.Snapshot).doc("Describe a snapshot", """ Return detailed information about the distribution of the snapshot's data on the disks. """ ) Api.snapshotInfo = GET('SnapshotGetInfo/{snapshotName}', SnapshotName, returns=sp.SnapshotInfo).doc("Get snapshot info", """ Return general information about the distribution of the snapshot's data on the disks. """ ) Api.snapshotCreate = POST('VolumeSnapshot/{volumeName}', VolumeName, json=sp.VolumeSnapshotDesc, returns=ApiOkVolumeCreate).doc("Snapshot a volume", """ Create a snapshot of the given volume; the snapshot becomes the parent of the volume. """ ) Api.snapshotUpdate = POST('SnapshotUpdate/{snapshotName}', SnapshotName, json=sp.SnapshotUpdateDesc).doc("Update a snapshot", """ Alter the configuration of an existing snapshot. """ ) Api.snapshotRebase = POST('SnapshotRebase/{snapshotName}', SnapshotName, json=sp.VolumeRebaseDesc).doc("Rebase a snapshot", """ Change the parent of the snapshot by choosing from the ones higher in the hierarchy or by rebasing it to no parent. """ ) Api.snapshotAbandonDisk = POST('VolumeAbandonDisk/{snapshotName}', SnapshotName, json=sp.AbandonDiskDesc).doc("Abandon disk", """ """ ) Api.snapshotDelete = POST('SnapshotDelete/{snapshotName}', SnapshotName).doc("Delete a snapshot", """ """) Api.spDocSection("Attachments", """""") Api.attachmentsList = GET('AttachmentsList', returns=[sp.AttachmentDesc]).doc("List all attachments", """ List the volumes and snapshots currently attached to clients along with the read/write rights of each attachment. """ ) Api.volumesReassign = POST('VolumesReassign', json=[either(sp.VolumeReassignDesc, sp.SnapshotReassignDesc)]).doc("Reassign volumes and/or snapshots", """ Perform bulk attach/detach and attachment rights modification. """ ) Api.spDocSection("Placement Groups", """ Placement groups provide a way to specify the disks on which a volume's data should be stored. """ ) Api.placementGroupsList = GET('PlacementGroupsList', returns={sp.PlacementGroupName: sp.PlacementGroup}).doc("List all placement groups", """ """) Api.placementGroupDescribe = GET('PlacementGroupDescribe/{placementGroupName}', PlacementGroupName, returns=sp.PlacementGroup).doc("Describe a single placement group", """ Same as PlacementGroupsList but only return information about a given group. """ ) Api.placementGroupUpdate = POST('PlacementGroupUpdate/{placementGroupName}', PlacementGroupName, json=sp.PlacementGroupUpdateDesc).doc("Create and/or update a placement group", """ If a group by the specified name does not exist, it will be created. """ ) Api.placementGroupDelete = POST('PlacementGroupDelete/{placementGroupName}', PlacementGroupName).doc("Delete a placement group", """ """) Api.spDocSection("Volume Templates", """ Templates are a set of rules used for creating many similar volumes. """ ) Api.volumeTemplatesList = GET('VolumeTemplatesList', returns=[sp.VolumeTemplateDesc]).doc("List all volume templates", """ """) Api.volumeTemplatesStatus = GET('VolumeTemplatesStatus', returns=[sp.VolumeTemplateStatusDesc]).doc("List the status of all volume templates", """ """) Api.volumeTemplateDescribe = GET('VolumeTemplateDescribe/{templateName}', VolumeTemplateName, returns=sp.VolumeTemplateDesc).doc("Describe a single volume template", """ Same as VolumeTemplatesList but only return information about a given template. 
""") Api.volumeTemplateCreate = POST('VolumeTemplateCreate', json=sp.VolumeTemplateCreateDesc).doc("Create a volume template", """ """) Api.volumeTemplateUpdate = POST('VolumeTemplateUpdate/{templateName}', VolumeTemplateName, json=sp.VolumeTemplateUpdateDesc).doc("Update a volume template", """ Alter the configuration of an existing volume template. """ ) Api.volumeTemplateDelete = POST('VolumeTemplateDelete/{templateName}', VolumeTemplateName).doc("Delete a volume template", """ """) Api.spDocSection("Volume Relocator", """ This is a service that moves data when needed, e.g. when removing or adding disks. """ ) Api.volumeRelocatorOn = POST('VolumeRelocatorOn').doc("Turn the relocator on", """ """) Api.volumeRelocatorOff = POST('VolumeRelocatorOff').doc("Turn the relocator off", """ """) Api.volumeRelocatorStatus = GET('VolumeRelocatorStatus', returns=sp.VolumeRelocatorStatus).doc("Get the relocator's status", """ """) Api.volumeRelocatorDisks = GET('VolumeRelocatorDisksList', returns={sp.DiskId: sp.DiskTarget}).doc("List total per disk relocation estimates", """ """ ) Api.volumeRelocatorVolumeDisks = GET('VolumeRelocatorVolumeDisks/{volumeName}', VolumeName, returns={sp.DiskId: sp.DiskTarget}).doc("List per disk relocation estimates for a given volume", """ """) Api.volumeRelocatorSnapshotDisks = GET('VolumeRelocatorSnapshotDisks/{snapshotName}', SnapshotName, returns={sp.DiskId: sp.DiskTarget}).doc("List per disk relocation estimates for a given snapshot", """ """) Api.spDocSection("Balancer", """ This is a service that decides when it is a good time to move data. """ ) Api.volumeBalancerGetStatus = GET('VolumeBalancerStatus', returns=sp.VolumeBalancerStatus).doc("Get the balancer's status", """ """) Api.volumeBalancerSetStatus = POST('VolumeBalancerStatus', json=sp.VolumeBalancerCommand).doc("Set the balancer's status", """ """) Api.volumeBalancerVolumesStatus = GET('VolumeBalancerVolumesStatus', returns=[sp.VolumaBalancerVolumeStatus]).doc("List balancer volume and snapshot status", """ Show which volumes and snapshots will be reallocated by the current balancer run. """ ) Api.volumeBalancerDisks = GET('VolumeBalancerDisksList', returns={sp.DiskId: sp.DiskTarget}).doc("List total per disk rebalancing estimates", """ """) Api.volumeBalancerVolumeDisks = GET('VolumeBalancerVolumeDisks/{volumeName}', VolumeName, returns={sp.DiskId: sp.DiskTarget}).doc("List per disk rebalancing estimated for a given volume", """ """) Api.volumeBalancerSnapshotDisks = GET('VolumeBalancerSnapshotDisks/{snapshotName}', SnapshotName, returns={sp.DiskId: sp.DiskTarget}).doc("List per disk rebalancing estimates for a given snapshot", """ """) Api.volumeBalancerVolumeDiskSets = GET('VolumeBalancerVolumeDiskSets/{volumeName}', VolumeName, returns=sp.VolumeBalancerVolumeDiskSets).doc("Get the disk sets computed by the balancer for a given volume", """ """) Api.volumeBalancerSnapshotDiskSets = GET('VolumeBalancerSnapshotDiskSets/{snapshotName}', SnapshotName, returns=sp.VolumeBalancerVolumeDiskSets).doc("Get the disk sets computed by the balancer for a given snapshot", """ """) Api.volumeBalancerGroups = GET('VolumeBalancerGroups', returns=[sp.VolumeBalancerAllocationGroup]).doc("List balancer allocation groups", """ """) PK°‰ýF–ªIstorpool/spjson.py# #- # Copyright (c) 2014, 2015 StorPool. # All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # try: import simplejson as js except ImportError: from sys import stderr print >> stderr, 'simplejson unavailable, fall-back to standart python json' import json as js from collections import defaultdict import sputils as sp import spdoc as dc sort_keys = False indent = None separators = (',', ':') load = js.load loads = js.loads dump = lambda obj, fp: js.dump(obj, fp, cls=JsonEncoder, sort_keys=sort_keys, indent=indent, separators=separators) dumps = lambda obj: js.dumps(obj, cls=JsonEncoder, sort_keys=sort_keys, indent=indent, separators=separators) class JsonEncoder(js.JSONEncoder): def default(self, o): if isinstance(o, JsonObjectImpl): return o.toJson() elif isinstance(o, set): return list(o) else: return super(JsonEncoder, self).default(o) class JsonObjectImpl(object): def __new__(cls, json={}, **kwargs): if isinstance(json, cls): assert not kwargs, "Unsupported update on already contructed object" return json else: j = dict(json) j.update(kwargs) self = super(JsonObjectImpl, cls).__new__(cls) object.__setattr__(self, '__jsonAttrs__', {}) for attr, attrDef in self.__jsonAttrDefs__.iteritems(): self.__jsonAttrs__[attr] = attrDef.handleVal(j[attr]) if attr in j else attrDef.defaultVal() return self def __getattr__(self, attr): return self.__jsonAttrs__[attr] def __setattr__(self, attr, value): if attr not in self.__jsonAttrDefs__: error = "'{cls}' has no attribute '{attr}'".format(cls=self.__class__.__name__, attr=attr) raise AttributeError(error) self.__jsonAttrs__[attr] = self.__jsonAttrDefs__[attr].handleVal(value) def toJson(self): return dict((attr, getattr(self, attr)) for attr in self.__jsonAttrDefs__) def __iter__(self): return self.toJson().iteritems() _asdict = toJson __str__ = __repr__ = lambda self: str(self.toJson()) class JsonObject(object): def __init__(self, **kwargs): self.attrDefs = dict((argName, sp.spType(argVal)) for argName, argVal in kwargs.iteritems()) def __call__(self, cls): if issubclass(cls, JsonObjectImpl): attrDefs = dict(cls.__jsonAttrDefs__) attrDefs.update(self.attrDefs) docDescs = defaultdict(lambda: "", dict((attrName, attrDesc) for attrName, (attrType, attrDesc) in cls.spDoc.attrs.iteritems())) else: attrDefs = self.attrDefs docDescs = defaultdict(lambda: "") doc = "" if cls.__doc__ is not None: doc += cls.__doc__ else: doc += "{0}.{1}".format(cls.__module__, cls.__name__) doc += "\n\n" doc += " JSON attributes:\n" for attrName, attrType in sorted(attrDefs.iteritems()): doc += " {name}: {type}\n".format(name=attrName, type=attrType.name) doc += "\n" if cls.__doc__ is not None: docDescs.update((k.strip(), v.strip()) for k, v in (m for m in (line.split(':') for line in cls.__doc__.split('\n')) if len(m) == 2)) spDoc = dc.JsonObjectDoc(cls.__name__, cls.__doc__ or "XXX {0}.{1} not documented.".format(cls.__module__, cls.__name__), dict( (attrName, (attrType.spDoc, docDescs[attrName])) for attrName, attrType in attrDefs.iteritems() )) return type(cls.__name__, (cls, JsonObjectImpl), dict(__jsonAttrDefs__=attrDefs, __module__=cls.__module__, __doc__=doc, spDoc=spDoc)) PK°‰ýFÐtFIìƒìƒstorpool/sptypes.py# #- # Copyright (c) 2014, 2015 StorPool. 
# All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import re from sputils import error, spTypeFun, maybe, const, either, eitherOr, internal from spjson import JsonObject, dumps ### Simple validator functions ### def regex(argName, regex): _regex = re.compile(regex) def validator(string): if string is None: error('No {argName} specified', argName=argName) try: string = str(string) if not _regex.match(string): error('Invalid {argName} "{argVal}". Must match {regex}', argName=argName, argVal=string, regex=regex) return string except ValueError: error('Invalid {argName}. Must be string', argName=argName) return spTypeFun(argName, validator, '''string, regex {regex}'''.format(regex=regex)) def oneOf(argName, *accepted): accepted = list(accepted) _accepted = frozenset(accepted) def validator(value): if value not in _accepted: error("Invalid {argName}: {value}. Must be one of {accepted}", argName=argName, value=value, accepted=accepted) else: return value return spTypeFun(argName, validator, '''One of {{{accepted}}}'''.format(accepted=", ".join(map(dumps, accepted)))) def intRange(argName, min, max): def validator(i): try: i = int(i) if i < min or i > max: error('Invalid {argName}. Must be between {min} and {max}', argName=argName, min=min, max=max) return i except ValueError: error('Invalid {argName}. Must be an integer', argName=argName) return spTypeFun(argName, validator, '''integer, {min} <= value <= {max}'''.format(min=min, max=max)) def namedEnum(argName, names, first=0): names = list(names) end = first + len(names) for name in names: assert name not in globals(), "{name} = {val} already defined in globals()".format(name=name, val=globals()[name]) globals()[name] = name def validator(val): try: val = int(val) if val < first or val >= end: error("Invalid {argName} value {val}. Must be between {first} and {last}", argName=argName, val=val, first=first, last=end - 1) return names[val - first] except ValueError: error("Invalid {argName}. Must be an integer") return spTypeFun(argName, validator, '''{argName}, enumeration from {first} to {last}'''.format(argName=argName, first=first, last=end-1)) def unlimitedInt(argName, min, unlimited): def validator(val): if val is None: error('No {argName} specified', argName=argName) elif val == unlimited: return val try: val = int(val) if val < min: error('Ivalid {argName}. Must be at least {min}', argName=argName, min=min) else: return val except ValueError: error('Non-numeric {argName}: {value}', argName=argName, value=val) return spTypeFun(argName, validator, '''a positive integer or '{unlimited}' for unlimited'''.format(unlimited=unlimited)) def nameValidator(argName, regex, size, *blacklisted): _regex = re.compile(regex) blacklisted = list(blacklisted) _blacklisted = frozenset(blacklisted) def validator(name): if name is None: error('No {argName} specified', argName=argName) try: name = str(name) if not _regex.match(name): error('Invalid {argName} "{argVal}". 
Must match {regex}', argName=argName, argVal=name, regex=regex) elif name in _blacklisted: error('{argName} must not be in {blacklisted}', argName=argName, blacklisted=blacklisted) elif len(name) >= size: error('{argName} is too long. Max allowed is {max}', argName=argName, max=size-1) else: return name except ValueError: error('Invalid {argName}. Must be a string', argName=argName) return spTypeFun(argName, validator, '''a string({size}), matching {regex}, except {{{blacklisted}}}'''.format(size=size, regex=regex, blacklisted=", ".join(map(str, blacklisted)))) def volumeSizeValidator(argName): def validator(size): try: size = int(size) if size < 1: error('Invalid {argName} {size}. Must be positive', argName=argName, size=size) elif size % SECTOR_SIZE: error('Invalid {argName} {size}. Must be a multiple of {sectorSize}', argName=argName, size=size, sectorSize=SECTOR_SIZE) else: return size except ValueError: error('Non-numeric {argName}: {size}', argName=argName, size=size) return spTypeFun(argName, validator, '''a positive integer divisible by {sectorSize}'''.format(sectorSize=SECTOR_SIZE)) ### Common constants ### VOLUME_NAME_SIZE = 200 PLACEMENT_GROUP_NAME_SIZE = 128 VOLUME_NAME_REGEX = r'^\#?[A-Za-z0-9_\-.:]+$' SNAPSHOT_NAME_REGEX = r'^\*?[A-Za-z0-9_\-.:@]+$' PLACEMENT_GROUP_NAME_REGEX = r'^[A-Za-z0-9_\-]+$' VOLUME_TEMPLATE_NAME_REGEX = r'^[A-Za-z0-9_\-]+$' DISK_DESC_REGEX = r'^[A-Za-z0-9_\- ]{,30}$' SECTOR_SIZE = 512 MAX_CHAIN_LENGTH = 6 MAX_CLIENT_DISKS = 1024 MAX_CLIENT_DISK = MAX_CLIENT_DISKS - 1 MAX_CLUSTER_DISKS = 4096 MAX_DISK_ID = MAX_CLUSTER_DISKS - 1 MAX_NET_ID = 3 MAX_NODE_ID = 63 MAX_PEER_ID = 0xffff PEER_TYPE_CLIENT = 0x8000 PEER_SUBTYPE_CLIENT_AOE = 0xe000 PEER_SUBTYPE_MGMT = 0xf000 MAX_SERVER_ID = PEER_TYPE_CLIENT - 1 MAX_CLIENT_ID = PEER_SUBTYPE_CLIENT_AOE - PEER_TYPE_CLIENT - 1 MAX_AOE_TARGET_ID = PEER_SUBTYPE_MGMT - PEER_SUBTYPE_CLIENT_AOE - 1 MAX_MGMT_ID = MAX_PEER_ID - PEER_SUBTYPE_MGMT ### Simple type validators ### MacAddr = regex('MAC Address', r'^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$') PeerStatus = oneOf('PeerStatus', 'up', 'down') ClientStatus = oneOf('ClientStatus', 'running', 'down') ServerStatus = oneOf('ServerStatus', 'running', 'waiting', 'booting', 'down') ClusterStatus = oneOf('ClusterStatus', 'running', 'waiting', 'down') NetId = intRange('NetID', 0, MAX_NET_ID) NodeId = intRange('NodeID', 0, MAX_NODE_ID) PeerId = intRange('PeerID', 0, MAX_PEER_ID) ClientId = intRange('ClientID', 1, MAX_CLIENT_ID) ServerId = intRange('ServerID', 1, MAX_SERVER_ID) MgmtId = intRange('MgmtID', 1, MAX_MGMT_ID) AoeTargetId = intRange("AoeTargetID", 1, MAX_AOE_TARGET_ID) DiskId = intRange('DiskID', 0, MAX_DISK_ID) DiskDescription = regex('DiskDescritpion', DISK_DESC_REGEX) SnapshotName = nameValidator("SnapshotName", SNAPSHOT_NAME_REGEX, VOLUME_NAME_SIZE, 'list', 'status') VolumeName = nameValidator("VolumeName", VOLUME_NAME_REGEX, VOLUME_NAME_SIZE, 'list', 'status') VolumeReplication = intRange('Replication', 1, 3) VolumeSize = volumeSizeValidator("Size") VolumeResize = volumeSizeValidator("SizeAdd") PlacementGroupName = nameValidator("PlacementGroupName", PLACEMENT_GROUP_NAME_REGEX, PLACEMENT_GROUP_NAME_SIZE, 'list') VolumeTemplateName = nameValidator("VolumeTemplateName", VOLUME_TEMPLATE_NAME_REGEX, VOLUME_NAME_SIZE, 'list') Bandwidth = unlimitedInt('Bandwidth', 0, '-') IOPS = unlimitedInt('IOPS', 0, '-') AttachmentRights = oneOf('AttachmentRights', 'rw', 'ro') ObjectState = namedEnum("ObjectState", "OBJECT_UNDEF OBJECT_OK OBJECT_OUTDATED OBJECT_IN_RECOVERY 
OBJECT_WAITING_FOR_VERSION OBJECT_WAITING_FOR_DISK OBJECT_DATA_NOT_PRESENT OBJECT_DATA_LOST OBJECT_WAINING_FOR_CHAIN OBJECT_WAIT_IDLE".split(' ')) ### NETWORK ### @JsonObject(mac=MacAddr) class NetDesc(object): pass @JsonObject(networks={NetId: NetDesc}) class PeerDesc(object): ''' networks: List of the networks that StorPool communicates through on this node. ''' ### SERVER ### @JsonObject(nodeId=NodeId, version=str) class Service(object): ''' nodeId: The ID of the node on which the service is running. version: The version of the running StorPool service. ''' @property def running(self): return self.status == 'running' @JsonObject(id=ServerId, status=ServerStatus, missingDisks=[DiskId], pendingDisks=[DiskId]) class Server(Service): ''' id: The ID of the service. Currently this is the same as the ID of the node. status: down - There is no storpool_server daemon running or it is still recovering its drives from a crashed state. waiting - storpool_server is running but waiting for some disks to appear to prevent split-brain situations. booting - No missing disks; the server is in the process of joining the cluster ... missingDisks: The cluster will remain down until these disks are seen again. This happens in the case of simultaneous failure of the whole cluster (power failure); the servers keep track of where the most recent configuration and data was stored. pendingDisks: Similar to missingDisks, these are the disks that are ready and waiting for the missing ones. ''' @JsonObject(id=ClientId, status=ClientStatus) class Client(Service): ''' id: The ID of the service. Currently this is the same as the ID of the node. status: The current status of the client. ''' @JsonObject(id=MgmtId, status=ClientStatus, prio=internal(int), active=bool) class Mgmt(Service): ''' id: The ID of the service. status: The current status of the mgmt instance. active: If the instance is currently active. For a given cluster one mgmt instance will be active at any given time. ''' @JsonObject(id=AoeTargetId, status=ClientStatus) class AoeTarget(Service): ''' id: The ID of the service. status: The current status of the AoE target. ''' @JsonObject(clusterStatus=ClusterStatus, mgmt={MgmtId: Mgmt}, clients={ClientId: Client}, servers={ServerId: Server}, aoeTargets={AoeTargetId: AoeTarget}) class ClusterStatus(object): ''' clusterStatus: The current status of the whole cluster. running - At least one running server; a cluster is formed. waiting - In quorum but negotiations between servers are not over yet. down - No quorum; most likely because more beacons are needed. ''' pass ### CLIENT ### @JsonObject(id=ClientId, generation=long, clientGeneration=long, configStatus=oneOf("client status", 'ok', 'updating', 'down'), delay=int) class ClientConfigStatus(object): ''' generation: The cluster generation based on the number of configuration changes since the cluster was created. clientGeneration: The generation of the specific client. configStatus: Whether there is an update of the configuration in progress. delay: The time it took for the client generation to reach the cluster generation. Only applicable to ClientConfigWait. Always 0 in ClientsConfigDump. ''' ### AOE ### AoeExportStatus = oneOf('AoeExportStatus', "OK", "down") @JsonObject(name=either(VolumeName, SnapshotName), snapshot=bool, aoeId=str, target=eitherOr(AoeTargetId, None), status=AoeExportStatus) class AoeExport(object): ''' A single StorPool volume or snapshot exported over AoE. name: The name of the StorPool volume. 
snapshot: True if this entry describes a snapshot instead of a volume. aoeId: The AoE identifier that the volume is exported as. target: The StorPool node that serves as an AoE target to export this volume. status: The status of the StorPool AoE target node if target is set. ''' ### TASK ### @JsonObject(diskId=DiskId, transactionId=long, allObjects=int, completedObjects=int, dispatchedObjects=int, unresolvedObjects=internal(int)) class Task(object): ''' diskId: The disk ID this task is on. transactionId: An ID associated with the currently running task. This ID is the same for all the tasks running on different disks but initiated by the same action (e.g. when reallocating a volume, all tasks associated with that volume will have the same ID). allObjects: The number of all the objects that the task is performing actions on. completedObjects: The number of objects that the task has finished working on. dispatchedObjects: Objects that the task has started working on. ''' ### DISK ### @JsonObject(objectId=internal(int), generation=long, version=long, volume=str, parentVolume=str, onDiskSize=int, storedSize=int, state=ObjectState, volumeId=internal(long)) class DiskObject(object): ''' parentVolume: The name of the parent snapshot. generation: The generation when the last write to this object occurred. onDiskSize: The space allocated on the disk for the object. This can go up to 32MB. storedSize: The size of the actual data in that object (<= onDiskSize). volume: The name of the volume for which the object contains data. version: With each write the version is increased. ''' @property def ok(self): return self.state == OBJECT_OK @JsonObject(name=str, storedSize=long, onDiskSize=long, objectsCount=long, objectStates={ObjectState:int}) class DiskVolumeInfo(object): ''' objectsCount: The number of objects of the volume stored on this disk. objectStates: For each state, the number of objects that are in that state. 0-undefined 1-ok 2-outdated 3-in_recovery 4-waiting_for_version 5-waiting_for_disk 6-data_not_present 7-data_lost 8-waiting_for_chain 9-wait_idle onDiskSize: The space allocated on the disk for the object. This can go up to 32MB. storedSize: The size of the actual data in that object (<= onDiskSize). ''' @JsonObject(pages=int, pagesPending=int, maxPages=int, iops=int, bandwidth=eitherOr(int, None)) class DiskWbcStats(object): pass @JsonObject(entries=int, space=int, total=int) class DiskAggregateScores(object): pass @JsonObject(id=DiskId, serverId=ServerId, generationLeft=long, model=str, serial=str, description=DiskDescription, softEject=oneOf('DiskSoftEjectStatus', "on", "off", "paused")) class DiskSummaryBase(object): ''' id: The ID of this disk. It is set when the disk is formatted to work with StorPool. serverId: The ID of the server this disk is currently on. In case the disk is currently down, the last known server ID is reported. generationLeft: The last cluster generation when the disk was active on a running server, or -1 if the disk is currently active. softEject: The status of the soft-eject process. description: A user-defined description of the disk for easier identification of the device. 
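Illustrative sketch: a minimal health pass over the disks listing, assuming an spapi.Api instance named api constructed as in example.py further below; disksList() and the id/serverId fields appear in example.py, the other fields are the ones documented above.

    from spapi import Api
    api = Api(host='127.0.0.1', port=80, auth='<auth-token>')
    # Flag disks that are down or being soft-ejected, per the fields documented above.
    for diskId, disk in api.disksList().iteritems():
        if disk.generationLeft != -1:
            print "Disk {d.id} is down; last seen on server {d.serverId}".format(d=disk)
        elif disk.softEject != 'off':
            print "Disk {d.id} is being soft-ejected ({d.softEject})".format(d=disk)
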
''' class DownDiskSummary(DiskSummaryBase): up = False @JsonObject(generationLeft=const(-1L), sectorsCount=long, empty=bool, ssd=bool, noFua=bool, isWbc=bool, device=str, agCount=internal(int), agAllocated=internal(int), agFree=internal(int), agFull=internal(int), agPartial=internal(int), agFreeing=internal(int), agMaxSizeFull=internal(int), agMaxSizePartial=internal(int), entriesCount=int, entriesAllocated=int, entriesFree=int, objectsCount=int, objectsAllocated=int, objectsFree=int, objectsOnDiskSize=long, wbc=internal(eitherOr(DiskWbcStats, None)), aggregateScore=internal(DiskAggregateScores)) class UpDiskSummary(DiskSummaryBase): ''' sectorsCount: The amount of 512-byte sectors on the disk. ssd: Whether the device is an SSD. noFua: Whether to issue FUA writes to this device. isWbc: Whether write-back cache is enabled for this device. device: The name of the physical disk device on the server. entriesAllocated: Used entries of the disk. objectsAllocated: Used objects of the disk. entriesFree: The remaining number of entries that can be stored on the disk. objectsFree: The remaining number of objects that can be stored on the disk. entriesCount: The maximum amount of entries that can exists on the disk. objectsCount: The maximum amount of object that can exists on the disk. empty: True if no volumes or snapshots are on this disk. objectsOnDiskSize: Total size occupied by objects. In essence, this is the estimated disk usage by StorPool. ''' up = True DiskSummary = either(UpDiskSummary, DownDiskSummary) @JsonObject(objectStates={ObjectState:int}, volumeInfos={str:DiskVolumeInfo}) class DiskInfo(UpDiskSummary): ''' For each state, the number of objects that are in that state. 0-undefined 1-ok 2-outdated 3-in_recovery 4-waiting_for_version 5-waiting_for_disk 6-data_not_present 7-data_lost 8-waiting_for_chain 9-wait_idle volumeInfos: Detailed information about the volumes that have data stored on the disk. ''' @JsonObject(objects={int:DiskObject}) class Disk(UpDiskSummary): ''' objects: Detailed information about each object on the disk. ''' @JsonObject(description=DiskDescription) class DiskDescUpdate(object): ''' description: A user-defined description of the disk for easier identification of the device. ''' ### ACTIVE REQUESTS ### @JsonObject(requestId=str, requestIdx=int, volume=either(VolumeName, SnapshotName), address=long, size=int, op=oneOf('RequestOp', "read", "write", "merge", "system", "entries flush", "#bad_state", "#bad_drOp"), state=internal(str), prevState=internal(str), drOp=internal(str), msecActive=int) class ActiveRequestDesc(object): ''' requestId: A unique request ID that may be matched between clients and disks. requestIdx: A temporary local request identifier for this request on this client or disk. address: The offset in bytes within the logical volume. size: The size of the request in bytes. op: The type of the requested operation; one of read, write, system, merge, entries flush, #bad_state, #bad_drOp state: An internal attribute used only for debugging. We strongly recommend that you do not use this attribute in any kind of automation. prevState: An internal attribute used only for debugging. We strongly recommend that you do not use this attribute in any kind of automation. drOp: An internal attribute used only for debugging. We strongly recommend that you do not use this attribute in any kind of automation. msecActive: Time in microseconds since the request was submitted. 
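Illustrative sketch: dumping the active requests of one client, assuming an spapi.Api instance named api as in example.py. The method name clientActiveRequests is an assumption based on the "List all the active requests on a client" call; check spapi.py for the exact binding. The fields used are the ones documented above; the result wrapper is the ClientActiveRequests type defined next.

    clientId = 1  # example client ID
    reqs = api.clientActiveRequests(clientId)  # assumed method name, see spapi.py
    for r in reqs.requests:
        print "{r.op} of {r.size} bytes at offset {r.address} on {r.volume}, active for {r.msecActive}".format(r=r)
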
''' @JsonObject(clientId=ClientId, requests=[ActiveRequestDesc]) class ClientActiveRequests(object): ''' requests: A detailed listing of all the requests associated with the given client. ''' @JsonObject(diskId=DiskId, requests=[ActiveRequestDesc]) class DiskActiveRequests(object): ''' requests: A detailed listing of all the requests associated with the given disk. ''' @JsonObject(aoeTargetId=AoeTargetId, requests=[ActiveRequestDesc]) class AoeTargetActiveRequests(object): ''' requests: A detailed listing of all the requests associated with the given AoE target. ''' ### PLACEMENT GROUP ### @JsonObject(id=internal(int), name=PlacementGroupName, disks=set([DiskId]), servers=set([ServerId])) class PlacementGroup(object): ''' disks: IDs of the participating disks. servers: IDs of the participating servers. ''' @JsonObject(rename=maybe(PlacementGroupName), addServers=set([ServerId]), addDisks=set([DiskId]), rmServers=set([ServerId]), rmDisks=set([DiskId])) class PlacementGroupUpdateDesc(object): ''' rename: The new name of the placement group. addServers: IDs of the servers to add to this group. (This will add all the accessible disks of these servers) addDisks: IDs of the disks to add to this group. rmServers: IDs of the servers to be removed from this group. rmDisks: IDs of the disks to be removed from this group. ''' ### VOLUME and SNAPSHOT ### @JsonObject(bw=Bandwidth, iops=IOPS) class VolumeLimits(object): ''' bw: Bandwidth limit in KB. iops: iops limit. ''' @JsonObject(id=internal(long), parentName=eitherOr(SnapshotName, ""), templateName=eitherOr(VolumeTemplateName, ""), size=VolumeSize, replication=VolumeReplication, placeAll=PlacementGroupName, placeTail=PlacementGroupName, parentVolumeId=internal(long), originalParentVolumeId=internal(long), visibleVolumeId=long, templateId=internal(long), objectsCount=int, creationTimestamp=long, flags=internal(int)) class VolumeSummaryBase(VolumeLimits): ''' parentName: The volume/snapshot's parent snapshot. templateName: The template that the volume/snapshot's settings are taken from. size: The volume/snapshots's size in bytes. replication: The number of copies/replicas kept. placeAll: The name of a placement group which describes the disks to be used for all but the last replica. placeTail: The name of a placement group which describes the disks to be used for the last replica, the one used for reading. parentVolumeId: The ID of the parent snapshot. visibleVolumeId: The ID by which the volume/snapshot was created. objectsCount: The number of objects that the volume/snapshot is comprised of. ''' @JsonObject(name=VolumeName) class VolumeSummary(VolumeSummaryBase): ''' name: The name of this volume. ''' @JsonObject(name=SnapshotName, onVolume=VolumeName, autoName=bool, bound=bool, deleted=bool, transient=bool) class SnapshotSummary(VolumeSummaryBase): ''' name: The name of this snapshot onVolume: The name of the volume that this is a parent of. autoName: Is this snapshot anonymous. bound: Is this a bound snapshot. Bound snapshots are garbage-collected as soon as they remain childless and are no longer attached. deleted: Is this snapshot currently being deleted. transient: Is this a transient snapshot. Transient snapshots are internally created when cloning a volume. They cannot be attached as they may be garbage-collected at any time. ''' @JsonObject(storedSize=long, spaceUsed=long) class SnapshotSpace(SnapshotSummary): ''' storedSize: The number of bytes of client data on this snapshot. 
This does not take into account the StorPool replication and overhead, thus it is never larger than the volume size. spaceUsed: The number of bytes of client data that will be freed if this snapshot is deleted. ''' @JsonObject(disks=[DiskId], count=int) class VolumeChainStat(object): ''' disks: IDs of the disks. count: The number of objects on the disks. ''' @JsonObject(disksCount=int, objectsPerDisk={DiskId:int}, objectsPerChain=[VolumeChainStat], objectsPerDiskSet=[VolumeChainStat]) class VolumeInfo(VolumeSummary): pass @JsonObject(disksCount=int, objectsPerDisk={DiskId:int}, objectsPerChain=[VolumeChainStat], objectsPerDiskSet=[VolumeChainStat]) class SnapshotInfo(SnapshotSummary): pass @JsonObject(name=either(VolumeName, SnapshotName), size=VolumeSize, replication=VolumeReplication, status=oneOf("VolumeCurentStatus", "up", "up soon", "data lost", "down"), snapshot=bool, migrating=bool, decreasedRedundancy=bool, balancerBlocked=bool, storedSize=int, onDiskSize=int, syncingDataBytes=int, syncingMetaObjects=int, downBytes=int, downDrives=[DiskId], missingDrives=[DiskId], missingTargetDrives=[DiskId], softEjectingDrives=[DiskId]) class VolumeStatus(object): ''' size: The volume's size in bytes. replication: The number of copies/replicas kept. status: up - The volume is operational. up soon - Synchronizing versions of objects after a disk has come back up. data lost - The last copy of some of the data in the volume has been lost. down - Some or all of the objects of the volume are missing and the volume is not in a state to continue serving operations. snapshot: True if this response describes a snapshot instead of a volume. migrating: True if there are tasks for reallocation of the volume. decreasedRedundancy: True if any of the replicas of the volume are missing. storedSize: The number of bytes of client data on the volume. This does not take into account the StorPool replication and overhead, thus it is never larger than the volume size. onDiskSize: The actual size that the objects of this volume occupy on the disks. syncingDataBytes: The total number of bytes in objects currently being synchronized (degraded objects or objects with not yet known version) syncingMetaObjects: The number of objects currently being synchronized (degraded objects or objects with not yet known version) downBytes: The number of bytes of the volume that are not accessible at the moment. downDrives: The IDs of the drives that are not accessible at the moment but needed by this volume. The volume will be in the 'down' status until all or some of these drives reappear. missingDrives: The IDs of the drives that are not accessible at the moment. The volume has all the needed data on the rest of the disks and can continue serving requests but it is in the 'degraded' status. ''' @JsonObject(targetDiskSets=[[DiskId]], objects=[[DiskId]]) class Snapshot(SnapshotSummary): ''' targetDiskSets: Sets of disks that the volume's data should be stored on. objects: Where each object is actually stored. ''' @JsonObject(targetDiskSets=[[DiskId]], objects=[[DiskId]]) class Volume(VolumeSummary): ''' targetDiskSets: Sets of disks that the volume's data should be stored on. objects: Where each object is actually stored. ''' @JsonObject(placeAll=maybe(PlacementGroupName), placeTail=maybe(PlacementGroupName), replication=maybe(VolumeReplication), bw=maybe(Bandwidth), iops=maybe(IOPS)) class VolumePolicyDesc(object): ''' placeAll: The name of a placement group which describes the disks to be used for all but the last replica. 
placeTail: The name of a placement group which describes the disks to be used for the last replica, the one used for reading. bw: Bandwidth limit in KB. iops: iops limit. replication: The number of copies/replicas kept. ''' @JsonObject(name=VolumeName, size=maybe(VolumeSize), parent=maybe(SnapshotName), template=maybe(VolumeTemplateName), baseOn=maybe(VolumeName)) class VolumeCreateDesc(VolumePolicyDesc): ''' name: The name of the volume to be created. size: The volume's size in bytes. parent: The name of the snapshot that the new volume is based on. template: The name of the template that the settings of the new volume are based on. baseOn: The name of an already existing volume that the new volume is to be a copy of. ''' @JsonObject(rename=maybe(VolumeName), size=maybe(VolumeSize), sizeAdd=maybe(VolumeResize), template=maybe(VolumeTemplateName), shrinkOk=maybe(bool)) class VolumeUpdateDesc(VolumePolicyDesc): ''' rename: The new name to be set. size: The new size in bytes. sizeAdd: The number of bytes that the volume's size should be increased by. template: The new template that the volume's settings should be based on. ''' @JsonObject(name=maybe(VolumeName), bind=maybe(bool)) class VolumeSnapshotDesc(object): ''' name: The name of the newly created snapshot. If not specified, a name will be auto-generated by the StorPool management service. bind: If true, the lifetime of the newly created snapshot will be bound to the lifetime of its children. As soon as it remains childless the snapshot will be garbage-collected. ''' @JsonObject(rename=maybe(VolumeName), bind=maybe(bool)) class SnapshotUpdateDesc(VolumePolicyDesc): ''' rename: The new name to be set. bind: When true bind this snapshot, when false - unbind it. If not set or missing - no change. ''' @JsonObject(parentName=maybe(SnapshotName)) class VolumeRebaseDesc(object): ''' parentName: The name of one of the volume's parents on which to re-base. If left out, it will be re-based to base. ''' @JsonObject(diskId=DiskId) class AbandonDiskDesc(object): ''' diskId: the disk to abandon. ''' ### VOLUME RIGHTS ### DetachClientsList = eitherOr([ClientId], "all") AttachmentPos = intRange('AttachmentPos', 0, MAX_CLIENT_DISK) @JsonObject(volume=VolumeName, detach=maybe(DetachClientsList), ro=maybe([ClientId]), rw=maybe([ClientId]), force=False) class VolumeReassignDesc(object): ''' volume: The name of the volume to be reassigned. detach: The clients from which to detach the given volume. ro: The clients on which to attach the volume as read only. rw: The clients on which to attach the volume as read/write. force: Whether to force detaching of open volumes. ''' @JsonObject(snapshot=SnapshotName, detach=maybe(DetachClientsList), ro=maybe([ClientId]), force=False) class SnapshotReassignDesc(object): ''' snapshot: The name of the snapshot which should be reassigned. detach: The clients from which to detach the given snapshot. ro: The clients on which to attach the snapshot. force: Whether to force detaching of open snapshots. ''' @JsonObject(volume=VolumeName, snapshot=bool, client=ClientId, rights=AttachmentRights, pos=AttachmentPos) class AttachmentDesc(object): ''' snapshot: Whether it is a snapshot or a volume. client: The ID of the client on which it is attached. volume: The name of the attached volume. rights: Whether the volume is attached as read only or read/write; always ro for snapshots. pos: The attachment position on the client; used by the StorPool client to form the name of the internal /dev/spN device node. 
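Illustrative sketch: as example.py shows for volumeCreate, the description objects above can also be passed as plain dicts. The method name volumesReassign and the list-shaped payload are assumptions; check spapi.py for the actual reassignment binding. The 'volume', 'rw' and 'ro' fields are the ones documented for VolumeReassignDesc above.

    # Attach testvol read/write on client 1 and read-only on client 2.
    api.volumesReassign([{'volume': 'testvol', 'rw': [1], 'ro': [2]}])  # assumed method name and payload shape
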
''' ### VOLUME TEMPLATES ### @JsonObject(id=internal(int), name=VolumeTemplateName, parentName=eitherOr(SnapshotName, ""), placeAll=PlacementGroupName, placeTail=PlacementGroupName, size=eitherOr(VolumeSize, "-"), replication=eitherOr(VolumeReplication, "-")) class VolumeTemplateDesc(VolumeLimits): ''' name: The name of the template. parentName: The name of the snapshot on which volumes will be based. placeAll: The name of a placement group which describes the disks to be used for all but the last replica. placeTail: The name of a placement group which describes the disks to be used for the last replica, the one used for reading. size: A default size for the volumes (in bytes). replication: A default number of copies to be kept by StorPool. ''' @JsonObject(id=internal(int), name=VolumeTemplateName, placeAll=PlacementGroupName, placeTail=PlacementGroupName, replication=eitherOr(VolumeReplication, "-"), volumesCount=int, snapshotsCount=int, removingSnapshotsCount=int, size=eitherOr(VolumeSize, 0), totalSize=eitherOr(VolumeSize, 0), onDiskSize=long, storedSize=long, availablePlaceAll=long, availablePlaceTail=long, capacityPlaceAll=long, capacityPlaceTail=long) class VolumeTemplateStatusDesc(object): ''' name: The name of the template. placeAll: The name of a placement group which describes the disks to be used for all but the last replica. placeTail: The name of a placement group which describes the disks to be used for the last replica, the one used for reading. replication: The number of copies to be kept by StorPool if defined for this template, otherwise "-". volumesCount: The number of volumes based on this template. snapshotsCount: The number of snapshots based on this template (incl. snapshots currently being deleted). removingSnapshotsCount: The number of snapshots based on this template currently being deleted. size: The number of bytes of all volumes based on this template, not counting the StorPool replication and checksums overhead. totalSize: The number of bytes of all volumes based on this template, including the StorPool replication overhead. storedSize: The number of bytes of client data on all the volumes based on this template. This does not take into account the StorPool replication and overhead, thus it is never larger than the volume size. onDiskSize: The actual on-disk number of bytes occupied by all the volumes based on this template. availablePlaceAll: An estimate of the available space on all the disks in this template's placeAll placement group. availablePlaceTail: An estimate of the available space on all the disks in this template's placeTail placement group. capacityPlaceAll: An estimate of the total physical space on all the disks in this template's placeAll placement group. capacityPlaceTail: An estimate of the total physical space on all the disks in this template's placeTail placement group. ''' @JsonObject(name=VolumeTemplateName, parent=maybe(SnapshotName), size=maybe(VolumeSize)) class VolumeTemplateCreateDesc(VolumePolicyDesc): ''' parent: The name of the snapshot on which to base volumes created by this template. size: A default size for the volumes (in bytes). ''' @JsonObject(rename=maybe(VolumeTemplateName), parent=maybe(SnapshotName), size=maybe(VolumeSize), propagate=maybe(bool)) class VolumeTemplateUpdateDesc(VolumePolicyDesc): ''' rename: The new name of the template. parent: The name of the snapshot on which to base volumes created by this template. size: A default size for the volumes (in bytes). 
propagate: Whether to propagate this change to all the volumes and snapshots using this template. ''' ### VOLUME RELOCATOR and BALANCER ### @JsonObject(status=oneOf("RelocatorStatus", 'on', 'off', 'blocked')) class VolumeRelocatorStatus(object): pass @JsonObject(status=oneOf("BalancerStatus", 'nothing to do', 'blocked', 'waiting', 'working', 'ready', 'commiting'), auto=bool) class VolumeBalancerStatus(object): pass @JsonObject(cmd=oneOf("BalancerCommand", 'start', 'stop', 'commit', 'auto')) class VolumeBalancerCommand(object): pass @JsonObject(name=either(VolumeName, SnapshotName), placeAll=PlacementGroupName, placeTail=PlacementGroupName, replication=VolumeReplication, size=long, objectsCount=int, snapshot=bool, reallocated=bool, blocked=bool) class VolumaBalancerVolumeStatus(object): pass @JsonObject(currentDiskSets=[[DiskId]], balancerDiskSets=[[DiskId]]) class VolumeBalancerVolumeDiskSets(VolumaBalancerVolumeStatus): pass @JsonObject(current=int, target=int, delta=int, toRecover=int) class TargetDesc(object): pass @JsonObject(id=DiskId, serverId=ServerId, generationLeft=long) class DownDiskTarget(object): pass @JsonObject(id=DiskId, serverId=ServerId, generationLeft=const(-1L), objectsAllocated=TargetDesc, objectsCount=int, storedSize=TargetDesc, onDiskSize=TargetDesc) class UpDiskTarget(object): pass DiskTarget = either(UpDiskTarget, DownDiskTarget) @JsonObject(storedSize=int, objectsCount=int) class VolumeBalancerSlot(object): pass @JsonObject(placeAll=PlacementGroupName, placeTail=PlacementGroupName, replication=VolumeReplication, feasible=bool, blocked=bool, size=int, storedSize=int, objectsCount=int, root=either(VolumeName, SnapshotName), volumes=[either(VolumeName, SnapshotName)], targetDiskSets=[[DiskId]], slots=[VolumeBalancerSlot]) class VolumeBalancerAllocationGroup(object): pass PKR¼F"kŸÙÙstorpool/sputils.py# #- # Copyright (c) 2014, 2015 StorPool. # All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# import gc import re from collections import Iterable, namedtuple from inspect import isfunction, isclass from os.path import exists, islink from subprocess import Popen, PIPE from time import sleep import spdoc as doc import spjson as js sec = 1.0 msec = 1.0e-3 * sec usec = 1e-6 * sec KB = 1024 MB = 1024 ** 2 GB = 1024 ** 3 TB = 1024 ** 4 def pr(x): print x return x def noop(*args, **kwargs): pass fTrue = lambda *args, **kwargs: True fFalse = lambda *args, **kwargs: False fNone = lambda *args, **kwargs: None idty = lambda x: x fst = lambda args: args[0] snd = lambda args: args[1] trd = lambda args: args[2] last = lambda args: args[-1] tail = lambda args: args[1:] swap = lambda (x, y): (y, x) roundUp = lambda n, k: ((n + k - 1) / k) * k to_iter = lambda x: x if isinstance(x, Iterable) and not isinstance(x, str) else (x,) lines = lambda args: '\n'.join(map(str, args)) def noGC(fun): ''' Disable garbage collection during the wrapped function ''' def wrapper(*args, **kwargs): try: gc.disable() return fun(*args, **kwargs) finally: gc.enable() gc.collect() return wrapper def pathPollWait(path, shouldExist, isLink, pollTime, maxTime): ''' poll/listen for path to appear/disappear ''' for i in xrange(int(maxTime / pollTime)): pathExists = exists(path) if pathExists and isLink: assert islink(path) if pathExists == shouldExist: return True else: sleep(pollTime) else: return False class InvalidArgumentException(Exception): def __init__(self, fmt, **kwargs): super(InvalidArgumentException, self).__init__() self.__dict__.update(**kwargs) self.__str = fmt.format(**kwargs) def __str__(self): return self.__str def error(fmt, **kwargs): raise InvalidArgumentException(fmt, **kwargs) SpType = namedtuple('SpType', ['name', 'handleVal', 'defaultVal', 'spDoc']) def spList(lst): assert len(lst) == 1, "SpList :: [subType]" subType = spType(lst[0]) valT = subType.handleVal name = "[{0}]".format(subType.name) _doc = doc.ListDoc(name, "A list of {0}".format(subType.name), deps=[subType.spDoc]) return SpType(name, lambda xs: [valT(x) for x in xs], lambda: [], _doc) def spSet(st): assert len(st) == 1, "SpSet :: set([subType])" subType = spType(list(st)[0]) valT = subType.handleVal name = "{{{0}}}".format(subType.name) _doc = doc.ListDoc(name, "A set of {0}".format(subType.name), deps=[subType.spDoc]) return SpType(name, lambda xs: set(valT(x) for x in xs), lambda: set(), _doc) def spDict(dct): assert len(dct) == 1, "SpDict :: {keyType: valueType}" keySt, valSt = map(spType, dct.items()[0]) keyT, valT = keySt.handleVal, valSt.handleVal name = "{{{0}: {1}}}".format(keySt.name, valSt.name) _doc = doc.DictDoc(name, "A dict from {0} to {1}".format(keySt.name, valSt.name), deps=[keySt.spDoc, valSt.spDoc]) return SpType(name, lambda dct: dict((keyT(key), valT(val)) for key, val in dct.iteritems()), lambda: {}, _doc) def maybe(val): subType = spType(val) valT = subType.handleVal name = "Optional({0})".format(subType.name) _doc = doc.TypeDoc("Optional", "If present must be of type {0}".format(subType.name), deps=[subType.spDoc]) return SpType(name, valT, lambda: None, _doc) def internal(val): subType = spType(val) valT = subType.handleVal name = "Internal({0})".format(subType.name) _doc = doc.TypeDoc("Internal", "An internal attribute used only for debugging. 
We strongly recommend that you do not use this attribute in any kind of automation.", deps=[subType.spDoc]) return SpType(name, valT, lambda: None, _doc) def const(constVal): name = js.dumps(constVal) _doc = doc.TypeDoc(name, "The constant value {0}.".format(name)) return SpType(name, lambda val: val if val == constVal else error("Trying to assign a value to const val"), lambda: constVal, _doc) def either(*types): types = map(spType, types) tpNames = ", ".join(t.name for t in types) name = "Either({0})".format(tpNames) _doc = doc.EitherDoc(name, "The value must be of one of the following types: {0}.".format(tpNames), [st.spDoc for st in types]) def handleVal(val): for t in types: try: return t.handleVal(val) except: pass else: error("The value does not match any type") return SpType(name, handleVal, lambda: error("No default value for either type"), _doc) eitherOr = lambda type, default: either(const(default), type) spTypes = { list: spList, set: spSet, dict: spDict, } spDocTypes = { bool: doc.TypeDoc("bool", "true or false."), int: doc.TypeDoc("int", "An integer value."), long: doc.TypeDoc("long", "A long integer value."), str: doc.TypeDoc("string", "A string value."), } def spTypeVal(val): subType = spType(type(val)) name = "{0}, default={1}".format(subType.name, js.dumps(val)) _doc = doc.TypeDoc(name, "A value of type {0}. Default value = {1}.".format(subType.name, val)) return SpType(name, subType.handleVal, lambda: val, _doc) def spTypeFun(argName, validator, argDoc): return SpType(argName, validator, lambda: error("No default value for {argName}", argName=argName), doc.TypeDoc(argName, argDoc)) def spType(tp): if isinstance(tp, SpType): return tp elif isclass(tp) or isfunction(tp): doc = spDocTypes.get(tp, None) if doc is None: doc = tp.spDoc return SpType(tp.__name__, tp, lambda: error("No default value for {type}", type=tp.__name__), doc) else: for _type, _spType in spTypes.iteritems(): if isinstance(tp, _type): return _spType(tp) else: return spTypeVal(tp) PKR¼F¶´{ÀÐÐstorpool/example.py# #- # Copyright (c) 2014 StorPool. # All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# ''' Simple StorPool API example script ''' from os import environ from sputils import GB, InvalidArgumentException from spapi import Api, ApiError from sptypes import VolumeCreateDesc api = Api( host=environ.get('SP_API_HTTP_HOST', "127.0.0.1"), port=environ.get('SP_API_HTTP_PORT', 80), auth=environ.get('SP_AUTH_TOKEN', "") ) for diskId, disk in api.disksList().iteritems(): assert disk.id == diskId print "Disk {disk.id:3}: serverId={disk.serverId}, objectsCount={disk.objectsCount}".format(disk=disk) for pgName, pgDesc in api.placementGroupsList().iteritems(): assert pgName == pgDesc.name print "Placement group {pg.name}: servers={pg.servers}, disks={pg.disks}".format(pg=pgDesc) for volume in api.volumesList(): print "Volume {volume.name}: size={volume.size}, replication={volume.replication}, objectsCount={volume.objectsCount}".format(volume=volume) api.volumeCreate({ 'name': 'myTestVol1', 'size': 10 * GB, 'replication': 2, 'placeAll': 'hdd', 'placeTail': 'ssd' }) api.volumeDelete('myTestVol1') desc = VolumeCreateDesc() desc.name = 'myTestVol2' try: desc.size = 1234 except InvalidArgumentException as e: print "Invalid argument:", e desc.size = 10 * GB desc.replication = 2 desc.placeAll = 'hdd' desc.placeTail = 'ssd' api.volumeCreate(desc) vols = api.volumeList(desc.name) assert len(vols) == 1 vol = vols[0] assert vol.name == desc.name assert vol.size == desc.size assert vol.replication == desc.replication assert vol.placeAll == desc.placeAll assert vol.placeTail == desc.placeTail api.volumeDelete(desc.name) try: vols = api.volumeList(desc.name) except ApiError as e: print "API Error:", e PKôm¼Fç¼Ø* * storpool/spconfig.py# #- # Copyright (c) 2013 - 2015 StorPool. # All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # """ StorPool configuration file parser """ import os import subprocess class SPConfigException(Exception): """ An error that occurred during the StorPool configuration parsing. 
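Typically raised from SPConfig.confget() below when the storpool_confget helper cannot be executed or reports an error. Illustrative sketch, assuming the SPConfig class defined next; the SP_AUTH_TOKEN key name is only an example:

    try:
        cfg = SPConfig()
        print "Auth token:", cfg.get('SP_AUTH_TOKEN', '<unset>')
    except SPConfigException as e:
        print "Could not read the StorPool configuration:", e
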
""" class SPConfig(object): def __init__(self, confget='/usr/sbin/storpool_confget', section=None): self._confget = confget self._dict = dict() self._section = section self.confget() def confget(self): args = (self._confget,) if self._section is None else (self._confget, '-s', self._section) confget = str.join(' ', args) try: p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=4096) out = p.communicate() wres = p.wait() except OSError as ose: raise SPConfigException('Could not read the StorPool configuration using {c}: {e}'.format(c=confget, e=ose.strerror)) except Exception as e: raise SPConfigException('Could not read the StorPool configuration using {c}: unexpected exception {t}: {e}'.format(c=confget, t=type(e).__name__, e=e)) if out[1]: out = out[1] err = True else: out = out[0] err = False out = out.replace("\\\n", "") out = filter(lambda s: len(s) > 0, out.split("\n")) if wres > 0: if err: raise SPConfigException('The StorPool configuration helper {c} exited with non-zero code {r}, error messages: {out}'.format(c=confget, r=wres, out=out)) else: raise SPConfigException('The StorPool configuration helper {c} exited with non-zero code {r}'.format(c=confget, r=wres)) elif wres < 0: if err: raise SPConfigException('The StorPool configuration helper {c} was killed by signal {s}, error messages: {out}'.format(c=confget, s=-wres, out=out)) else: raise SPConfigException('The StorPool configuration helper {c} was killed by signal {s}'.format(c=confget, s=-wres)) elif err: raise SPConfigException('The StorPool configuration helper {c} reported errors: {out}'.format(c=confget, out=out)) d = {} for s in out: (key, val) = s.split('=', 1) d[key] = val self._dict = d def __getitem__(self, key): return self._dict[key] def get(self, key, defval): return self._dict.get(key, defval) def __iter__(self): return self.iterkeys() def items(self): return self._dict.items() def keys(self): return self._dict.keys() def iteritems(self): return self._dict.iteritems() def iterkeys(self): return self._dict.iterkeys() PKŠýF^-Ò (storpool-2.0.0.dist-info/DESCRIPTION.rstUNKNOWN PKŠýF"»Ãââ&storpool-2.0.0.dist-info/metadata.json{"extensions": {"python.details": {"contacts": [{"email": "openstack-dev@storpool.com", "name": "Peter Pentchev", "role": "author"}], "document_names": {"description": "DESCRIPTION.rst"}, "project_urls": {"Home": "http://www.storpool.com/"}}}, "generator": "bdist_wheel (0.24.0)", "keywords": ["storpool", "StorPool"], "license": "Apache License 2.0", "metadata_version": "2.0", "name": "storpool", "summary": "Bindings for the StorPool distributed storage API", "version": "2.0.0"}PKŠýF ¢¨Ì /storpool-2.0.0.dist-info/namespace_packages.txtstorpool PKŠýFM]Þ//!storpool-2.0.0.dist-info/pbr.json{"is_release": false, "git_version": "8e85b3f"}PKŠýF ¢¨Ì &storpool-2.0.0.dist-info/top_level.txtstorpool PKŠýF“×2!storpool-2.0.0.dist-info/zip-safe PKŠýF4»´Ø\\storpool-2.0.0.dist-info/WHEELWheel-Version: 1.0 Generator: bdist_wheel (0.24.0) Root-Is-Purelib: true Tag: py2-none-any PKŠýFÙÀb¦((!storpool-2.0.0.dist-info/METADATAMetadata-Version: 2.0 Name: storpool Version: 2.0.0 Summary: Bindings for the StorPool distributed storage API Home-page: http://www.storpool.com/ Author: Peter Pentchev Author-email: openstack-dev@storpool.com License: Apache License 2.0 Keywords: storpool StorPool Platform: UNKNOWN UNKNOWN PKŠýFæ÷BªÕÕstorpool-2.0.0.dist-info/RECORDstorpool-2.0.0-nspkg.pth,sha256=ykvxAGqAoWzQ_-0oJc47CESaFJif90sBlTKHkU-gTNc,311 
usr/share/doc/python-storpool/apidoc.html,sha256=6-iZzRddUgJ9j7rXFIchCZiz5jxSXE2uyHEg7QpAViU,274044 storpool-2.0.0.dist-info/top_level.txt,sha256=Mb6rOVp3v0ehOqEheZ5uA-Efs0svED9iB3Mn-fgGtSs,9 storpool-2.0.0.dist-info/DESCRIPTION.rst,sha256=OCTuuN6LcWulhHS3d5rfjdsQtW22n7HENFRh6jC6ego,10 storpool-2.0.0.dist-info/RECORD,, storpool-2.0.0.dist-info/zip-safe,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1 storpool-2.0.0.dist-info/METADATA,sha256=-NaVzDJM0WF3uPC2LWpfOovJFADEE92OVWUyKgI7uZw,296 storpool-2.0.0.dist-info/namespace_packages.txt,sha256=Mb6rOVp3v0ehOqEheZ5uA-Efs0svED9iB3Mn-fgGtSs,9 storpool-2.0.0.dist-info/WHEEL,sha256=54bVun1KfEBTJ68SHUmbxNPj80VxlQ0sHi4gZdGZXEY,92 storpool-2.0.0.dist-info/metadata.json,sha256=kRuTPY5alpDXluQGI8c8XDgFJWTQu5h1xh4G1NBqGBw,482 storpool-2.0.0.dist-info/pbr.json,sha256=KtLKnvAZVsfJapvALoz2ynRe9FWV8fx9s_CoJRCsBh4,47 storpool/spdoc.py,sha256=CAgj4-oCffxYac0DiHT8iFxG0PmlmKQma_1iyOvS-SI,9490 storpool/spapi.py,sha256=Bq8yIJN6EzhdE3bmaLJJmAWdOpmF3Cvze9lVUiyPXB4,23616 storpool/spjson.py,sha256=s21mBdQF0Xb6QXawiV198BxMdo_Zj-YfvJYogqEEB5A,3868 storpool/sptypes.py,sha256=XXh9i9BuMAOYImMNVQH7SDA44NmBPFe9-RNUGZmGTXI,33772 storpool/sputils.py,sha256=jc3i73gYmOqLsNzneu3XtBMeh4e8cIK18TNKAzwOXBw,6105 storpool/example.py,sha256=OOYOxF0cAGc-UUV7BfhLVNkkeXqUAtCfoTlNEz4nLfQ,2256 storpool/spconfig.py,sha256=cqyzsGKN3u7U6uCPf6WXIZCK5S7qcBjcqnGDTogccuo,3370