Compare commits


No commits in common. "master" and "v0.2.3" have entirely different histories.

191 changed files with 8743 additions and 19080 deletions

.github/FUNDING.yml vendored

@@ -1 +1 @@
github: [AzorianSolutions]
github: [ngoduykhanh]


@@ -1,7 +0,0 @@
---
# Reference: https://help.github.com/en/github/building-a-strong-community/configuring-issue-templates-for-your-repository#configuring-the-template-chooser
blank_issues_enabled: false
contact_links:
  - name: 📖 Project Update - PLEASE READ!
    url: https://github.com/PowerDNS-Admin/PowerDNS-Admin/discussions/1708
    about: "Important information about the future of this project"


@@ -1,14 +0,0 @@
<!--
Thank you for your interest in contributing to the PowerDNS Admin project! Please note that our contribution
policy requires that a feature request or bug report be approved and assigned prior to opening a pull request.
This helps avoid wasted time and effort on a proposed change that we might not want to or be able to accept.
IF YOUR PULL REQUEST DOES NOT REFERENCE AN ISSUE WHICH HAS BEEN ASSIGNED TO YOU, IT WILL BE CLOSED AUTOMATICALLY!
Please specify your assigned issue number on the line below.
-->
### Fixes: #1234
<!--
Please include a summary of the proposed changes below.
-->

.github/SUPPORT.md vendored

@@ -1,15 +0,0 @@
# PowerDNS Admin
## Project Support
**Looking for help?** PDA has a somewhat active community of fellow users that may be able to provide assistance.
Just [start a discussion](https://github.com/PowerDNS-Admin/PowerDNS-Admin/discussions/new) right here on GitHub!
Looking to chat with someone? Join our [Discord Server](https://discord.powerdnsadmin.org).
Some general tips for engaging here on GitHub:
* Register for a free [GitHub account](https://github.com/signup) if you haven't already.
* You can use [GitHub Markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) for formatting text and adding images.
* To help mitigate notification spam, please avoid "bumping" issues with no activity. (To vote an issue up or down, use a :thumbsup: or :thumbsdown: reaction.)
* Please avoid pinging members with `@` unless they've previously expressed interest or involvement with that particular issue.


@@ -1,23 +0,0 @@
---
version: 2
updates:
  - package-ecosystem: npm
    target-branch: dev
    directory: /
    schedule:
      interval: daily
    ignore:
      - dependency-name: "*"
        update-types: [ "version-update:semver-major" ]
    labels:
      - 'feature / dependency'
  - package-ecosystem: pip
    target-branch: dev
    directory: /
    schedule:
      interval: daily
    ignore:
      - dependency-name: "*"
        update-types: [ "version-update:semver-major" ]
    labels:
      - 'feature / dependency'

.github/labels.yml vendored

@@ -1,98 +0,0 @@
---
labels:
  - name: bug / broken-feature
    description: Existing feature malfunctioning or broken
    color: 'd73a4a'
  - name: bug / security-vulnerability
    description: Security vulnerability identified with the application
    color: 'd73a4a'
  - name: docs / discussion
    description: Documentation change proposals
    color: '0075ca'
  - name: docs / request
    description: Documentation change request
    color: '0075ca'
  - name: feature / dependency
    description: Existing feature dependency
    color: '008672'
  - name: feature / discussion
    description: New or existing feature discussion
    color: '008672'
  - name: feature / request
    description: New feature or enhancement request
    color: '008672'
  - name: feature / update
    description: Existing feature modification
    color: '008672'
  - name: help / deployment
    description: Questions regarding application deployment
    color: 'd876e3'
  - name: help / features
    description: Questions regarding the use of application features
    color: 'd876e3'
  - name: help / other
    description: General questions not specific to application deployment or features
    color: 'd876e3'
  - name: mod / accepted
    description: This request has been accepted
    color: 'e5ef23'
  - name: mod / announcement
    description: This is an admin announcement
    color: 'e5ef23'
  - name: mod / change-request
    description: Used by internal developers to indicate a change-request.
    color: 'e5ef23'
  - name: mod / changes-requested
    description: Changes have been requested before proceeding
    color: 'e5ef23'
  - name: mod / duplicate
    description: This issue or pull request already exists
    color: 'e5ef23'
  - name: mod / good-first-issue
    description: Good for newcomers
    color: 'e5ef23'
  - name: mod / help-wanted
    description: Extra attention is needed
    color: 'e5ef23'
  - name: mod / invalid
    description: This doesn't seem right
    color: 'e5ef23'
  - name: mod / rejected
    description: This request has been rejected
    color: 'e5ef23'
  - name: mod / reviewed
    description: This request has been reviewed
    color: 'e5ef23'
  - name: mod / reviewing
    description: This request is being reviewed
    color: 'e5ef23'
  - name: mod / stale
    description: This request has gone stale
    color: 'e5ef23'
  - name: mod / tested
    description: This has been tested
    color: 'e5ef23'
  - name: mod / testing
    description: This is being tested
    color: 'e5ef23'
  - name: mod / wont-fix
    description: This will not be worked on
    color: 'e5ef23'
  - name: skill / database
    description: Requires a database skill-set
    color: '5319E7'
  - name: skill / docker
    description: Requires a Docker skill-set
    color: '5319E7'
  - name: skill / documentation
    description: Requires a documentation skill-set
    color: '5319E7'
  - name: skill / html
    description: Requires an HTML skill-set
    color: '5319E7'
  - name: skill / javascript
    description: Requires a JavaScript skill-set
    color: '5319E7'
  - name: skill / python
    description: Requires a Python skill-set
    color: '5319E7'

.github/stale.yml vendored Normal file

@@ -0,0 +1,19 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 60
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7
# Issues with these labels will never be considered stale
exemptLabels:
  - pinned
  - security
  - enhancement
  - feature request
# Label to use when marking an issue as stale
staleLabel: wontfix
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
  This issue has been automatically marked as stale because it has not had
  recent activity. It will be closed if no further activity occurs. Thank you
  for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: true


@@ -1,81 +0,0 @@
---
name: 'Docker Image'
on:
  workflow_dispatch:
  push:
    branches:
      - 'dev'
      - 'master'
    tags:
      - 'v*.*.*'
    paths-ignore:
      - .github/**
      - deploy/**
      - docker-test/**
      - docs/**
      - .dockerignore
      - .gitattributes
      - .gitignore
      - .lgtm.yml
      - .whitesource
      - .yarnrc
      - docker-compose.yml
      - docker-compose-test.yml
      - LICENSE
      - README.md
      - SECURITY.md
jobs:
  build-and-push-docker-image:
    name: Build Docker Image
    runs-on: ubuntu-latest
    steps:
      - name: Repository Checkout
        uses: actions/checkout@v2
      - name: Docker Image Metadata
        id: meta
        uses: docker/metadata-action@v3
        with:
          images: |
            powerdnsadmin/pda-legacy
          tags: |
            type=ref,event=tag
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
      - name: QEMU Setup
        uses: docker/setup-qemu-action@v2
      - name: Docker Buildx Setup
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Docker Hub Authentication
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME_V2 }}
          password: ${{ secrets.DOCKERHUB_TOKEN_V2 }}
      - name: Docker Image Build
        uses: docker/build-push-action@v4
        with:
          platforms: linux/amd64,linux/arm64
          context: ./
          file: ./docker/Dockerfile
          push: true
          tags: powerdnsadmin/pda-legacy:${{ github.ref_name }}
      - name: Docker Image Release Tagging
        uses: docker/build-push-action@v4
        if: ${{ startsWith(github.ref, 'refs/tags/v') }}
        with:
          platforms: linux/amd64,linux/arm64
          context: ./
          file: ./docker/Dockerfile
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}


@@ -1,134 +0,0 @@
---
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
  workflow_dispatch:
  push:
    branches:
      - 'dev'
      - 'main'
      - 'master'
      - 'dependabot/**'
      - 'feature/**'
      - 'issue/**'
    paths-ignore:
      - .github/**
      - deploy/**
      - docker/**
      - docker-test/**
      - docs/**
      - powerdnsadmin/static/assets/**
      - powerdnsadmin/static/custom/css/**
      - powerdnsadmin/static/img/**
      - powerdnsadmin/swagger-spec.yaml
      - .dockerignore
      - .gitattributes
      - .gitignore
      - .lgtm.yml
      - .whitesource
      - .yarnrc
      - docker-compose.yml
      - docker-compose-test.yml
      - LICENSE
      - package.json
      - README.md
      - requirements.txt
      - SECURITY.md
      - yarn.lock
  pull_request:
    # The branches below must be a subset of the branches above
    branches:
      - 'dev'
      - 'main'
      - 'master'
      - 'dependabot/**'
      - 'feature/**'
      - 'issue/**'
    paths-ignore:
      - .github/**
      - deploy/**
      - docker/**
      - docker-test/**
      - docs/**
      - powerdnsadmin/static/assets/**
      - powerdnsadmin/static/custom/css/**
      - powerdnsadmin/static/img/**
      - powerdnsadmin/swagger-spec.yaml
      - .dockerignore
      - .gitattributes
      - .gitignore
      - .lgtm.yml
      - .whitesource
      - .yarnrc
      - docker-compose.yml
      - docker-compose-test.yml
      - LICENSE
      - package.json
      - README.md
      - requirements.txt
      - SECURITY.md
      - yarn.lock
  schedule:
    - cron: '45 2 * * 2'
jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write
    strategy:
      fail-fast: false
      matrix:
        language: [ 'javascript', 'python' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
        # Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v2
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # For details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
          # queries: security-extended,security-and-quality
      # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
      # If this step fails, then you should remove it and run the build manually (see below)
      - name: Autobuild
        uses: github/codeql-action/autobuild@v2
      # Command-line programs to run using the OS shell.
      # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
      # If the Autobuild fails above, remove it, uncomment the following three lines,
      # and modify them (or add more) to build your code; please refer to the example below for guidance.
      # - run: |
      #     echo "Run, Build Application using script"
      #     ./location_of_script_within_repo/buildscript.sh
      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v2


@@ -1,24 +0,0 @@
---
# lock-threads (https://github.com/marketplace/actions/lock-threads)
name: 'Lock threads'
on:
  schedule:
    - cron: '0 3 * * *'
  workflow_dispatch:
permissions:
  issues: write
  pull-requests: write
jobs:
  lock:
    runs-on: ubuntu-latest
    steps:
      - uses: dessant/lock-threads@v3
        with:
          issue-inactive-days: 90
          pr-inactive-days: 30
          issue-lock-reason: 'resolved'
          exclude-any-issue-labels: 'bug / security-vulnerability, mod / announcement, mod / accepted, mod / reviewing, mod / testing'
          exclude-any-pr-labels: 'bug / security-vulnerability, mod / announcement, mod / accepted, mod / reviewing, mod / testing'


@@ -1,92 +0,0 @@
---
# MegaLinter GitHub Action configuration file
# More info at https://megalinter.io
name: MegaLinter
on:
  workflow_dispatch:
  push:
    branches-ignore:
      - "*"
      - "dev"
      - "main"
      - "master"
      - "dependabot/**"
      - "feature/**"
      - "issues/**"
      - "release/**"
env: # Comment env block if you do not want to apply fixes
  # Apply linter fixes configuration
  APPLY_FIXES: all # When active, APPLY_FIXES must also be defined as environment variable (in github/workflows/mega-linter.yml or other CI tool)
  APPLY_FIXES_EVENT: all # Decide which event triggers application of fixes in a commit or a PR (pull_request, push, all)
  APPLY_FIXES_MODE: pull_request # If APPLY_FIXES is used, defines if the fixes are directly committed (commit) or posted in a PR (pull_request)
concurrency:
  group: ${{ github.ref }}-${{ github.workflow }}
  cancel-in-progress: true
jobs:
  build:
    name: MegaLinter
    runs-on: ubuntu-latest
    steps:
      # Git Checkout
      - name: Checkout Code
        uses: actions/checkout@v3
        with:
          token: ${{ secrets.PAT || secrets.GITHUB_TOKEN }}
      # MegaLinter
      - name: MegaLinter
        id: ml
        # You can override MegaLinter flavor used to have faster performances
        # More info at https://megalinter.io/flavors/
        uses: oxsecurity/megalinter@v6
        env:
          # All available variables are described in documentation
          # https://megalinter.io/configuration/
          VALIDATE_ALL_CODEBASE: true # Validates all source when push on main, else just the git diff with main. Override with true if you always want to lint all sources
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PAT: ${{ secrets.PAT }}
          # ADD YOUR CUSTOM ENV VARIABLES HERE OR DEFINE THEM IN A FILE .mega-linter.yml AT THE ROOT OF YOUR REPOSITORY
          # DISABLE: COPYPASTE,SPELL # Uncomment to disable copy-paste and spell checks
      # Upload MegaLinter artifacts
      - name: Archive production artifacts
        if: success() || failure()
        uses: actions/upload-artifact@v3
        with:
          name: MegaLinter reports
          path: |
            megalinter-reports
            mega-linter.log
      # Create pull request if applicable (for now works only on PR from same repository, not from forks)
      - name: Create PR with applied fixes
        id: cpr
        if: steps.ml.outputs.has_updated_sources == 1 && (env.APPLY_FIXES_EVENT == 'all' || env.APPLY_FIXES_EVENT == github.event_name) && env.APPLY_FIXES_MODE == 'pull_request' && (github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository)
        uses: peter-evans/create-pull-request@v4
        with:
          token: ${{ secrets.PAT || secrets.GITHUB_TOKEN }}
          commit-message: "[MegaLinter] Apply linters automatic fixes"
          title: "[MegaLinter] Apply linters automatic fixes"
          labels: bot
      - name: Create PR output
        if: steps.ml.outputs.has_updated_sources == 1 && (env.APPLY_FIXES_EVENT == 'all' || env.APPLY_FIXES_EVENT == github.event_name) && env.APPLY_FIXES_MODE == 'pull_request' && (github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository)
        run: |
          echo "Pull Request Number - ${{ steps.cpr.outputs.pull-request-number }}"
          echo "Pull Request URL - ${{ steps.cpr.outputs.pull-request-url }}"
      # Push new commit if applicable (for now works only on PR from same repository, not from forks)
      - name: Prepare commit
        if: steps.ml.outputs.has_updated_sources == 1 && (env.APPLY_FIXES_EVENT == 'all' || env.APPLY_FIXES_EVENT == github.event_name) && env.APPLY_FIXES_MODE == 'commit' && github.ref != 'refs/heads/main' && (github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository)
        run: sudo chown -Rc $UID .git/
      - name: Commit and push applied linter fixes
        if: steps.ml.outputs.has_updated_sources == 1 && (env.APPLY_FIXES_EVENT == 'all' || env.APPLY_FIXES_EVENT == github.event_name) && env.APPLY_FIXES_MODE == 'commit' && github.ref != 'refs/heads/main' && (github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository)
        uses: stefanzweifel/git-auto-commit-action@v4
        with:
          branch: ${{ github.event.pull_request.head.ref || github.head_ref || github.ref }}
          commit_message: "[MegaLinter] Apply linters fixes"


@@ -1,46 +0,0 @@
# close-stale-issues (https://github.com/marketplace/actions/close-stale-issues)
name: 'Close Stale Threads'
on:
  schedule:
    - cron: '0 4 * * *'
  workflow_dispatch:
permissions:
  issues: write
  pull-requests: write
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v6
        with:
          close-issue-message: >
            This issue has been automatically closed due to lack of activity. In an
            effort to reduce noise, please do not comment any further. Note that the
            core maintainers may elect to reopen this issue at a later date if deemed
            necessary.
          close-pr-message: >
            This PR has been automatically closed due to lack of activity.
          days-before-stale: 90
          days-before-close: 30
          exempt-issue-labels: 'bug / security-vulnerability, mod / announcement, mod / accepted, mod / reviewing, mod / testing'
          operations-per-run: 100
          remove-stale-when-updated: false
          stale-issue-label: 'mod / stale'
          stale-issue-message: >
            This issue has been automatically marked as stale because it has not had
            recent activity. It will be closed if no further activity occurs. PDA
            is governed by a small group of core maintainers which means not all opened
            issues may receive direct feedback. **Do not** attempt to circumvent this
            process by "bumping" the issue; doing so will result in its immediate closure
            and you may be barred from participating in any future discussions. Please see our
            [Contribution Guide](https://github.com/PowerDNS-Admin/PowerDNS-Admin/blob/master/docs/CONTRIBUTING.md).
          stale-pr-label: 'mod / stale'
          stale-pr-message: >
            This PR has been automatically marked as stale because it has not had
            recent activity. It will be closed automatically if no further action is
            taken. Please see our
            [Contribution Guide](https://github.com/PowerDNS-Admin/PowerDNS-Admin/blob/master/docs/CONTRIBUTING.md).

.gitignore vendored

@@ -1,5 +1,3 @@
flask_session
# gedit
*~
@@ -40,7 +38,5 @@ node_modules
powerdnsadmin/static/generated
.webassets-cache
.venv*
venv*
.pytest_cache
.DS_Store
yarn-error.log

.travis.yml Normal file

@@ -0,0 +1,5 @@
language: minimal
script:
  - docker-compose -f docker-compose-test.yml up --exit-code-from powerdns-admin --abort-on-container-exit
services:
  - docker


@@ -1,7 +1,6 @@
The MIT License (MIT)
Copyright (c) 2016 Khanh Ngo - ngokhanhit[at]gmail.com
Copyright (c) 2022 Azorian Solutions - legal[at]azorian.solutions
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -1,67 +1,47 @@
# PowerDNS-Admin
A PowerDNS web interface with advanced features.
[![CodeQL](https://github.com/PowerDNS-Admin/PowerDNS-Admin/actions/workflows/codeql-analysis.yml/badge.svg?branch=master)](https://github.com/PowerDNS-Admin/PowerDNS-Admin/actions/workflows/codeql-analysis.yml)
[![Docker Image](https://github.com/PowerDNS-Admin/PowerDNS-Admin/actions/workflows/build-and-publish.yml/badge.svg?branch=master)](https://github.com/PowerDNS-Admin/PowerDNS-Admin/actions/workflows/build-and-publish.yml)
[![Build Status](https://travis-ci.org/ngoduykhanh/PowerDNS-Admin.svg?branch=master)](https://travis-ci.org/ngoduykhanh/PowerDNS-Admin)
[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/ngoduykhanh/PowerDNS-Admin.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/ngoduykhanh/PowerDNS-Admin/context:python)
[![Language grade: JavaScript](https://img.shields.io/lgtm/grade/javascript/g/ngoduykhanh/PowerDNS-Admin.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/ngoduykhanh/PowerDNS-Admin/context:javascript)
#### Features:
- Provides forward and reverse zone management
- Provides zone templating features
- Provides user management with role based access control
- Provides zone specific access control
- Provides activity logging
- Authentication:
- Local User Support
- SAML Support
- LDAP Support: OpenLDAP / Active Directory
- OAuth Support: Google / GitHub / Azure / OpenID
- Two-factor authentication support (TOTP)
- PDNS Service Configuration & Statistics Monitoring
- Multiple domain management
- Domain template
- User management
- User access management based on domain
- User activity logging
- Support Local DB / SAML / LDAP / Active Directory user authentication
- Support Google / Github / Azure / OpenID OAuth
- Support Two-factor authentication (TOTP)
- Dashboard and pdns service statistics
- DynDNS 2 protocol support
- Easy IPv6 PTR record editing
- Provides an API for zone and record management among other features
- Provides full IDN/Punycode support
## [Project Update - PLEASE READ!!!](https://github.com/PowerDNS-Admin/PowerDNS-Admin/discussions/1708)
- Edit IPv6 PTRs using IPv6 addresses directly (no more editing of literal addresses!)
- Limited API for manipulating zones and records
## Running PowerDNS-Admin
There are several ways to run PowerDNS-Admin. The quickest way is to use Docker.
If you are looking to install and run PowerDNS-Admin directly onto your system, check out
the [wiki](https://github.com/PowerDNS-Admin/PowerDNS-Admin/blob/master/docs/wiki/) for ways to do that.
There are several ways to run PowerDNS-Admin. The easiest way is to use Docker.
If you are looking to install and run PowerDNS-Admin directly onto your system check out the [Wiki](https://github.com/ngoduykhanh/PowerDNS-Admin/wiki#installation-guides) for ways to do that.
### Docker
Here are two options to run PowerDNS-Admin using Docker.
To get started as quickly as possible, try option 1. If you want to make modifications to the configuration option 2 may
be cleaner.
These are two options to run PowerDNS-Admin using Docker.
To get started as quickly as possible, try option 1. If you want to make modifications to the configuration, option 2 may be cleaner.
#### Option 1: From Docker Hub
To run the application using the latest stable release on Docker Hub, run the following command:
The easiest is to just run the latest Docker image from Docker Hub:
```
$ docker run -d \
-e SECRET_KEY='a-very-secret-key' \
-v pda-data:/data \
-p 9191:80 \
powerdnsadmin/pda-legacy:latest
ngoduykhanh/powerdns-admin:latest
```
This creates a volume named `pda-data` to persist the default SQLite database with app configuration.
This creates a volume called `pda-data` to persist the SQLite database with the configuration.
#### Option 2: Using docker-compose
1. Update the configuration
Edit the `docker-compose.yml` file to update the database connection string in `SQLALCHEMY_DATABASE_URI`.
Other environment variables are mentioned in
the [AppSettings.defaults](https://github.com/PowerDNS-Admin/PowerDNS-Admin/blob/master/powerdnsadmin/lib/settings.py) dictionary.
To use a Docker-style secrets convention, one may append `_FILE` to the environment variables with a path to a file
containing the intended value of the variable (e.g. `SQLALCHEMY_DATABASE_URI_FILE=/run/secrets/db_uri`).
Make sure to set the environment variable `SECRET_KEY` to a long, random
string (https://flask.palletsprojects.com/en/1.1.x/config/#SECRET_KEY)
Other environment variables are mentioned in the [legal_envvars](https://github.com/ngoduykhanh/PowerDNS-Admin/blob/master/configs/docker_config.py#L5-L46).
To use the Docker secrets feature it is possible to append `_FILE` to the environment variables and point to a file with the values stored in it.
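The `_FILE` lookup described above can be sketched as follows. This is a minimal sketch mirroring the documented behavior, not PowerDNS-Admin's actual implementation; `read_setting` is a hypothetical helper name, and the trailing-whitespace strip is a convenience assumption:

```python
import os

def read_setting(name, environ=None):
    """Resolve a setting from NAME or NAME_FILE (Docker secrets style).

    If NAME_FILE is set, it must point to a file whose contents become
    the value; setting both NAME and NAME_FILE is rejected as ambiguous.
    """
    environ = os.environ if environ is None else environ
    file_key = name + '_FILE'
    if file_key in environ:
        if name in environ:
            raise AttributeError(
                'Both {} and {} are set but are exclusive.'.format(name, file_key))
        with open(environ[file_key]) as f:
            # Secrets files often end with a newline; strip it here.
            return f.read().strip()
    return environ.get(name)
```

For example, `SQLALCHEMY_DATABASE_URI_FILE=/run/secrets/db_uri` would resolve to the contents of that secrets file, while a plain `SQLALCHEMY_DATABASE_URI` is used verbatim.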
2. Start docker container
```
@@ -71,28 +51,12 @@ This creates a volume named `pda-data` to persist the default SQLite database wi
You can then access PowerDNS-Admin by pointing your browser to http://localhost:9191.
## Screenshots
![dashboard](https://user-images.githubusercontent.com/6447444/44068603-0d2d81f6-9fa5-11e8-83af-14e2ad79e370.png)
![dashboard](docs/screenshots/dashboard.png)
## LICENSE
MIT. See [LICENSE](https://github.com/ngoduykhanh/PowerDNS-Admin/blob/master/LICENSE)
## Support
If you like the project and want to support it, you can *buy me a coffee*
**Looking for help?** Try taking a look at the project's
[Support Guide](https://github.com/PowerDNS-Admin/PowerDNS-Admin/blob/master/.github/SUPPORT.md) or joining
our [Discord Server](https://discord.powerdnsadmin.org).
## Security Policy
Please see our [Security Policy](https://github.com/PowerDNS-Admin/PowerDNS-Admin/blob/master/SECURITY.md).
## Contributing
Please see our [Contribution Guide](https://github.com/PowerDNS-Admin/PowerDNS-Admin/blob/master/docs/CONTRIBUTING.md).
## Code of Conduct
Please see our [Code of Conduct Policy](https://github.com/PowerDNS-Admin/PowerDNS-Admin/blob/master/docs/CODE_OF_CONDUCT.md).
## License
This project is released under the MIT license. For additional
information, [see the full license](https://github.com/PowerDNS-Admin/PowerDNS-Admin/blob/master/LICENSE).
<a href="https://www.buymeacoffee.com/khanhngo" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" height="41" width="174"></a>


@@ -1,31 +0,0 @@
# Security Policy
## No Warranty
Per the terms of the MIT license, PDA is offered "as is" and without any guarantee or warranty pertaining to its operation. While every reasonable effort is made by its maintainers to ensure the product remains free of security vulnerabilities, users are ultimately responsible for conducting their own evaluations of each software release.
## Recommendations
Administrators are encouraged to adhere to industry best practices concerning the secure operation of software, such as:
* Do not expose your PDA installation to the public Internet
* Do not permit multiple users to share an account
* Enforce minimum password complexity requirements for local accounts
* Prohibit access to your database from clients other than the PDA application
* Keep your deployment updated to the most recent stable release
## Reporting a Suspected Vulnerability
If you believe you've uncovered a security vulnerability and wish to report it confidentially, you may do so via email. Please note that any reported vulnerabilities **MUST** meet all the following conditions:
* Affects the most recent stable release of PDA, or a current beta release
* Affects a PDA instance installed and configured per the official documentation
* Is reproducible following a prescribed set of instructions
Please note that we **DO NOT** accept reports generated by automated tooling which merely suggest that a file or file(s) _may_ be vulnerable under certain conditions, as these are most often innocuous.
If you believe that you've found a vulnerability which meets all of these conditions, please [submit a draft security advisory](https://github.com/PowerDNS-Admin/PowerDNS-Admin/security/advisories/new) on GitHub, or email a brief description of the suspected bug and instructions for reproduction to **admin@powerdnsadmin.org**.
### Bug Bounties
As PDA is provided as free open source software, we do not offer any monetary compensation for vulnerability or bug reports, however your contributions are greatly appreciated.


@@ -1 +0,0 @@
0.4.2


@@ -1,13 +1,12 @@
import os
#import urllib.parse
basedir = os.path.abspath(os.path.dirname(__file__))
basedir = os.path.abspath(os.path.abspath(os.path.dirname(__file__)))
### BASIC APP CONFIG
SALT = '$2b$12$yLUMTIfl21FKJQpTkRQXCu'
SECRET_KEY = 'e951e5a1f4b94151b360f47edf596dd2'
BIND_ADDRESS = '0.0.0.0'
PORT = 9191
SERVER_EXTERNAL_SSL = os.getenv('SERVER_EXTERNAL_SSL', None)
OFFLINE_MODE = False
### DATABASE CONFIG
SQLA_DB_USER = 'pda'
@ -16,34 +15,8 @@ SQLA_DB_HOST = '127.0.0.1'
SQLA_DB_NAME = 'pda'
SQLALCHEMY_TRACK_MODIFICATIONS = True
#CAPTCHA Config
CAPTCHA_ENABLE = True
CAPTCHA_LENGTH = 6
CAPTCHA_WIDTH = 160
CAPTCHA_HEIGHT = 60
CAPTCHA_SESSION_KEY = 'captcha_image'
#Server side sessions tracking
#Set to TRUE for CAPTCHA, or enable another stateful session tracking system
SESSION_TYPE = 'sqlalchemy'
### DATABASE - MySQL
## Don't forget to uncomment the import in the top
#SQLALCHEMY_DATABASE_URI = 'mysql://{}:{}@{}/{}'.format(
# urllib.parse.quote_plus(SQLA_DB_USER),
# urllib.parse.quote_plus(SQLA_DB_PASSWORD),
# SQLA_DB_HOST,
# SQLA_DB_NAME
#)
### DATABASE - PostgreSQL
## Don't forget to uncomment the import in the top
#SQLALCHEMY_DATABASE_URI = 'postgres://{}:{}@{}/{}'.format(
# urllib.parse.quote_plus(SQLA_DB_USER),
# urllib.parse.quote_plus(SQLA_DB_PASSWORD),
# SQLA_DB_HOST,
# SQLA_DB_NAME
#)
# SQLALCHEMY_DATABASE_URI = 'mysql://' + SQLA_DB_USER + ':' + SQLA_DB_PASSWORD + '@' + SQLA_DB_HOST + '/' + SQLA_DB_NAME
### DATABASE - SQLite
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'pdns.db')
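The commented MySQL/PostgreSQL templates above run the credentials through `urllib.parse.quote_plus` so that reserved characters do not corrupt the URI. A quick sketch with a hypothetical password (the real values come from the config above):

```python
from urllib.parse import quote_plus

SQLA_DB_USER = 'pda'
SQLA_DB_PASSWORD = 'p@ss/word'  # hypothetical password containing reserved characters
SQLA_DB_HOST = '127.0.0.1'
SQLA_DB_NAME = 'pda'

# Unescaped '@' and '/' would be parsed as URI delimiters, so both
# user and password are percent-encoded before interpolation.
SQLALCHEMY_DATABASE_URI = 'mysql://{}:{}@{}/{}'.format(
    quote_plus(SQLA_DB_USER),
    quote_plus(SQLA_DB_PASSWORD),
    SQLA_DB_HOST,
    SQLA_DB_NAME,
)
# → 'mysql://pda:p%40ss%2Fword@127.0.0.1/pda'
```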
@@ -134,14 +107,6 @@ SAML_ENABLED = False
# ### the user is set as a non-administrator user.
# #SAML_ATTRIBUTE_ADMIN = 'https://example.edu/pdns-admin'
## Attribute to get admin status for groups with the IdP
# ### Default: Don't set administrator group with SAML attributes
#SAML_GROUP_ADMIN_NAME = 'GroupName'
## Attribute to get operator status for groups with the IdP
# ### Default: Don't set operator group with SAML attributes
#SAML_GROUP_OPERATOR_NAME = 'GroupName'
# ## Attribute to get account names from
# ### Default: Don't control accounts with SAML attribute
# ### If set, the user will be added and removed from accounts to match
@@ -149,16 +114,6 @@ SAML_ENABLED = False
# ### be created and the user added to them.
# SAML_ATTRIBUTE_ACCOUNT = 'https://example.edu/pdns-account'
# ## Attribute name that aggregates group names
# ### Default: Don't collect IdP groups from SAML group attributes
# ### In Okta, you can assign administrators by group using "Group Attribute Statements."
# ### In this case, the SAML_ATTRIBUTE_GROUP will be the attribute name for a collection of
# ### groups passed in the SAML assertion. From there, you can specify a SAML_GROUP_ADMIN_NAME.
# ### If the user is a member of this group, and that group name is included in the collection,
# ### the user will be set as an administrator.
# #SAML_ATTRIBUTE_GROUP = 'https://example.edu/pdns-groups'
# #SAML_GROUP_ADMIN_NAME = 'PowerDNSAdmin-Administrators'
# SAML_SP_ENTITY_ID = 'http://<SAML SP Entity ID>'
# SAML_SP_CONTACT_NAME = '<contact name>'
# SAML_SP_CONTACT_MAIL = '<contact mail>'
@@ -172,8 +127,8 @@ SAML_ENABLED = False
# CAUTION: For production use, usage of self-signed certificates is highly discouraged.
# Use certificates from trusted CA instead
# ###########################################################################################
# SAML_CERT = '/etc/pki/powerdns-admin/cert.crt'
# SAML_KEY = '/etc/pki/powerdns-admin/key.pem'
# SAML_CERT_FILE = '/etc/pki/powerdns-admin/cert.crt'
# SAML_CERT_KEY = '/etc/pki/powerdns-admin/key.pem'
# Configures if SAML tokens should be encrypted.
# SAML_SIGN_REQUEST = False
@@ -187,10 +142,6 @@ SAML_ENABLED = False
# #SAML_ASSERTION_ENCRYPTED = True
# Some IdPs, like Okta, do not return Attribute Statements by default
# Set the following to False if you are using Okta and not manually configuring Attribute Statements
# #SAML_WANT_ATTRIBUTE_STATEMENT = True
# Remote authentication settings
# Whether to enable remote user authentication or not


@@ -1,2 +1,102 @@
# Defaults for Docker image
BIND_ADDRESS = '0.0.0.0'
PORT = 80
SQLALCHEMY_DATABASE_URI = 'sqlite:////data/powerdns-admin.db'
legal_envvars = (
    'SECRET_KEY',
    'BIND_ADDRESS',
    'PORT',
    'LOG_LEVEL',
    'SALT',
    'SQLALCHEMY_TRACK_MODIFICATIONS',
    'SQLALCHEMY_DATABASE_URI',
    'MAIL_SERVER',
    'MAIL_PORT',
    'MAIL_DEBUG',
    'MAIL_USE_TLS',
    'MAIL_USE_SSL',
    'MAIL_USERNAME',
    'MAIL_PASSWORD',
    'MAIL_DEFAULT_SENDER',
    'SAML_ENABLED',
    'SAML_DEBUG',
    'SAML_PATH',
    'SAML_METADATA_URL',
    'SAML_METADATA_CACHE_LIFETIME',
    'SAML_IDP_SSO_BINDING',
    'SAML_IDP_ENTITY_ID',
    'SAML_NAMEID_FORMAT',
    'SAML_ATTRIBUTE_EMAIL',
    'SAML_ATTRIBUTE_GIVENNAME',
    'SAML_ATTRIBUTE_SURNAME',
    'SAML_ATTRIBUTE_NAME',
    'SAML_ATTRIBUTE_USERNAME',
    'SAML_ATTRIBUTE_ADMIN',
    'SAML_ATTRIBUTE_GROUP',
    'SAML_GROUP_ADMIN_NAME',
    'SAML_GROUP_TO_ACCOUNT_MAPPING',
    'SAML_ATTRIBUTE_ACCOUNT',
    'SAML_SP_ENTITY_ID',
    'SAML_SP_CONTACT_NAME',
    'SAML_SP_CONTACT_MAIL',
    'SAML_SIGN_REQUEST',
    'SAML_WANT_MESSAGE_SIGNED',
    'SAML_LOGOUT',
    'SAML_LOGOUT_URL',
    'SAML_ASSERTION_ENCRYPTED',
    'OFFLINE_MODE',
    'REMOTE_USER_LOGOUT_URL',
    'REMOTE_USER_COOKIES'
)
legal_envvars_int = ('PORT', 'MAIL_PORT', 'SAML_METADATA_CACHE_LIFETIME')
legal_envvars_bool = (
    'SQLALCHEMY_TRACK_MODIFICATIONS',
    'HSTS_ENABLED',
    'MAIL_DEBUG',
    'MAIL_USE_TLS',
    'MAIL_USE_SSL',
    'SAML_ENABLED',
    'SAML_DEBUG',
    'SAML_SIGN_REQUEST',
    'SAML_WANT_MESSAGE_SIGNED',
    'SAML_LOGOUT',
    'SAML_ASSERTION_ENCRYPTED',
    'OFFLINE_MODE',
    'REMOTE_USER_ENABLED'
)
# import everything from environment variables
import os
import sys


def str2bool(v):
    return v.lower() in ("true", "yes", "1")


for v in legal_envvars:
    ret = None
    # The _FILE suffix allows reading the value from a file, useful for
    # Docker's secrets feature
    if v + '_FILE' in os.environ:
        if v in os.environ:
            raise AttributeError(
                "Both {} and {} are set but are exclusive.".format(
                    v, v + '_FILE'))
        with open(os.environ[v + '_FILE']) as f:
            ret = f.read()
    elif v in os.environ:
        ret = os.environ[v]
    if ret is not None:
        if v in legal_envvars_bool:
            ret = str2bool(ret)
        if v in legal_envvars_int:
            ret = int(ret)
        sys.modules[__name__].__dict__[v] = ret
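As a rough, hypothetical sketch of the type-coercion rules this loader applies (the helper name `coerce` is not part of the file; it only mirrors the logic), raw environment strings become typed values like this:

```python
# Mimic the loader's coercion: names listed as booleans are parsed with
# str2bool, names listed as integers with int(), the rest stay strings.
def str2bool(v):
    return v.lower() in ("true", "yes", "1")

def coerce(name, raw, int_vars, bool_vars):
    if name in bool_vars:
        return str2bool(raw)
    if name in int_vars:
        return int(raw)
    return raw

print(coerce('PORT', '80', ('PORT',), ()))                    # 80
print(coerce('SAML_ENABLED', 'True', (), ('SAML_ENABLED',)))  # True
print(coerce('MAIL_SERVER', 'smtp.example.com', (), ()))      # smtp.example.com
```

Note that any value other than "true", "yes", or "1" (case-insensitive) parses as False for a boolean variable.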

View File

@ -1,5 +1,5 @@
import os
basedir = os.path.abspath(os.path.dirname(__file__))
basedir = os.path.abspath(os.path.abspath(os.path.dirname(__file__)))
### BASIC APP CONFIG
SALT = '$2b$12$yLUMTIfl21FKJQpTkRQXCu'

View File

@ -1,16 +0,0 @@
#!/bin/bash
# Create a new group for PowerDNS-Admin
groupadd powerdnsadmin
# Create a user for PowerDNS-Admin
useradd --system -g powerdnsadmin powerdnsadmin
# Make the new user and group the owners of the PowerDNS-Admin files
chown -R powerdnsadmin:powerdnsadmin /opt/web/powerdns-admin
# Start the PowerDNS-Admin service
systemctl start powerdns-admin
# Enable the PowerDNS-Admin service to start automatically at boot
systemctl enable powerdns-admin

View File

@ -1,16 +0,0 @@
@echo off
rem Create a new group for PowerDNS-Admin
net localgroup powerdnsadmin /add
rem Create a user for PowerDNS-Admin
net user powerdnsadmin /add /passwordchg:no /homedir:nul /active:yes /expires:never /passwordreq:no /s
rem Make the new user and group the owners of the PowerDNS-Admin files
icacls "C:\path\to\powerdns-admin" /setowner "powerdnsadmin"
rem Start the PowerDNS-Admin service
net start powerdns-admin
rem Enable the PowerDNS-Admin service to start automatically at boot
sc config powerdns-admin start= auto

View File

@ -1,15 +0,0 @@
version: '3.3'
services:
  core:
    image: powerdnsadmin/pda-legacy:latest
    restart: unless-stopped
    environment:
      - SECRET_KEY=INSECURE-CHANGE-ME-9I0DAtfkfj5JmBkPSaHah3ECAa8Df5KK
    ports:
      - "12000:9191"
    volumes:
      - "core_data:/data"
volumes:
  core_data:

View File

@ -1,2 +0,0 @@
# Kubernetes
Example and simplified deployment for kubernetes.

View File

@ -1,8 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
  name: powerdnsadmin-env
data:
  FLASK_APP: powerdnsadmin/__init__.py
  SECRET_KEY: changeme_secret
  SQLALCHEMY_DATABASE_URI: 'mysql://user:password@host/database'

View File

@ -1,29 +0,0 @@
kind: Deployment
apiVersion: apps/v1
metadata:
  name: powerdnsadmin
  labels:
    app: powerdnsadmin
spec:
  strategy:
    type: RollingUpdate
  replicas: 1
  selector:
    matchLabels:
      app: powerdnsadmin
  template:
    metadata:
      labels:
        app: powerdnsadmin
    spec:
      containers:
        - name: powerdnsadmin
          image: powerdnsadmin/pda-legacy
          ports:
            - containerPort: 80
              protocol: TCP
          envFrom:
            - configMapRef:
                name: powerdnsadmin-env
          imagePullPolicy: Always
      restartPolicy: Always

View File

@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: powerdnsadmin
  namespace: powerdnsadmin
  labels:
    app: powerdnsadmin
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: powerdnsadmin

View File

@ -1,11 +1,11 @@
version: "3.8"
version: "2.1"
services:
powerdns-admin:
image: powerdns-admin-test
build:
context: .
dockerfile: docker-test/Dockerfile
image: powerdns-admin-test
container_name: powerdns-admin-test
ports:
- "9191:80"
@ -17,10 +17,10 @@ services:
- pdns-server
pdns-server:
image: pdns-server-test
build:
context: .
dockerfile: docker-test/Dockerfile.pdns
image: pdns-server-test
ports:
- "5053:53"
- "5053:53/udp"

View File

@ -2,7 +2,7 @@ version: "3"
services:
app:
image: powerdnsadmin/pda-legacy:latest
image: ngoduykhanh/powerdns-admin:latest
container_name: powerdns_admin
ports:
- "9191:80"
@ -15,3 +15,4 @@ services:
- GUNICORN_TIMEOUT=60
- GUNICORN_WORKERS=2
- GUNICORN_LOGLEVEL=DEBUG
- OFFLINE_MODE=False # True for offline, False for external resources

View File

@ -1,36 +1,15 @@
FROM debian:bullseye-slim
FROM debian:stretch-slim
LABEL maintainer="k@ndk.name"
ENV LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8 LANGUAGE=en_US.UTF-8
RUN apt-get update -y \
&& apt-get install -y --no-install-recommends \
apt-transport-https \
curl \
build-essential \
libffi-dev \
libldap2-dev \
libmariadb-dev-compat \
libpq-dev \
libsasl2-dev \
libssl-dev \
libxml2-dev \
libxmlsec1-dev \
libxmlsec1-openssl \
libxslt1-dev \
locales \
locales-all \
pkg-config \
python3-dev \
python3-pip \
python3-setuptools \
&& curl -sL https://deb.nodesource.com/setup_lts.x | bash - \
&& apt-get install -y --no-install-recommends apt-transport-https locales locales-all python3-pip python3-setuptools python3-dev curl libsasl2-dev libldap2-dev libssl-dev libxml2-dev libxslt1-dev libxmlsec1-dev libffi-dev build-essential libmariadb-dev-compat \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash - \
&& curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& apt-get update -y \
&& apt-get install -y --no-install-recommends \
nodejs \
yarn \
&& apt-get install -y nodejs yarn \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
@ -42,6 +21,8 @@ RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
COPY . /app
COPY ./docker/entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/entrypoint.sh
ENV FLASK_APP=powerdnsadmin/__init__.py
RUN yarn install --pure-lockfile --production \
@ -50,4 +31,4 @@ RUN yarn install --pure-lockfile --production \
COPY ./docker-test/wait-for-pdns.sh /opt
RUN chmod u+x /opt/wait-for-pdns.sh
CMD ["/opt/wait-for-pdns.sh", "/usr/local/bin/pytest", "-W", "ignore::DeprecationWarning", "--capture=no", "-vv"]
CMD ["/opt/wait-for-pdns.sh", "/usr/local/bin/pytest","--capture=no","-vv"]

View File

@ -10,9 +10,9 @@ fi
# Import schema structure
if [ -e "/data/pdns.sql" ]; then
rm -f /data/pdns.db
rm /data/pdns.db
cat /data/pdns.sql | sqlite3 /data/pdns.db
rm -f /data/pdns.sql
rm /data/pdns.sql
echo "Imported schema structure"
fi

View File

@ -1,16 +1,14 @@
FROM alpine:3.17 AS builder
FROM alpine:3.12 AS builder
LABEL maintainer="k@ndk.name"
ARG BUILD_DEPENDENCIES="build-base \
libffi-dev \
libpq-dev \
libxml2-dev \
mariadb-connector-c-dev \
openldap-dev \
python3-dev \
xmlsec-dev \
npm \
yarn \
cargo"
yarn"
ENV LC_ALL=en_US.UTF-8 \
LANG=en_US.UTF-8 \
@ -31,7 +29,7 @@ COPY ./requirements.txt /build/requirements.txt
# Get application dependencies
RUN pip install --upgrade pip && \
pip install --use-pep517 -r requirements.txt
pip install -r requirements.txt
# Add sources
COPY . /build
@ -39,7 +37,7 @@ COPY . /build
# Prepare assets
RUN yarn install --pure-lockfile --production && \
yarn cache clean && \
sed -i -r -e "s|'rcssmin',\s?'cssrewrite'|'rcssmin'|g" /build/powerdnsadmin/assets.py && \
sed -i -r -e "s|'cssmin',\s?'cssrewrite'|'cssmin'|g" /build/powerdnsadmin/assets.py && \
flask assets build
RUN mv /build/powerdnsadmin/static /tmp/static && \
@ -47,7 +45,6 @@ RUN mv /build/powerdnsadmin/static /tmp/static && \
cp -r /tmp/static/generated /build/powerdnsadmin/static && \
cp -r /tmp/static/assets /build/powerdnsadmin/static && \
cp -r /tmp/static/img /build/powerdnsadmin/static && \
find /tmp/static/node_modules -name 'webfonts' -exec cp -r {} /build/powerdnsadmin/static \; && \
find /tmp/static/node_modules -name 'fonts' -exec cp -r {} /build/powerdnsadmin/static \; && \
find /tmp/static/node_modules/icheck/skins/square -name '*.png' -exec cp {} /build/powerdnsadmin/static/generated \;
@ -67,13 +64,21 @@ RUN mkdir -p /app && \
mkdir -p /app/configs && \
cp -r /build/configs/docker_config.py /app/configs
# Cleanup
RUN pip install pip-autoremove && \
pip-autoremove cssmin -y && \
pip-autoremove jsmin -y && \
pip-autoremove pytest -y && \
pip uninstall -y pip-autoremove && \
apk del ${BUILD_DEPENDENCIES}
# Build image
FROM alpine:3.17
FROM alpine:3.12
ENV FLASK_APP=/app/powerdnsadmin/__init__.py \
USER=pda
RUN apk add --no-cache mariadb-connector-c postgresql-client py3-gunicorn py3-pyldap py3-flask py3-psycopg2 xmlsec tzdata libcap && \
RUN apk add --no-cache mariadb-connector-c postgresql-client py3-gunicorn py3-psycopg2 xmlsec tzdata libcap && \
addgroup -S ${USER} && \
adduser -S -D -G ${USER} ${USER} && \
mkdir /data && \
@ -82,16 +87,16 @@ RUN apk add --no-cache mariadb-connector-c postgresql-client py3-gunicorn py3-py
apk del libcap
COPY --from=builder /usr/bin/flask /usr/bin/
COPY --from=builder /usr/lib/python3.10/site-packages /usr/lib/python3.10/site-packages/
COPY --from=builder /usr/lib/python3.8/site-packages /usr/lib/python3.8/site-packages/
COPY --from=builder --chown=root:${USER} /app /app/
COPY ./docker/entrypoint.sh /usr/bin/
WORKDIR /app
RUN chown ${USER}:${USER} ./configs /app && \
RUN chown ${USER}:${USER} ./configs && \
cat ./powerdnsadmin/default_config.py ./configs/docker_config.py > ./powerdnsadmin/docker_config.py
EXPOSE 80/tcp
USER ${USER}
HEALTHCHECK --interval=5s --timeout=5s --start-period=20s --retries=5 CMD wget --output-document=- --quiet --tries=1 http://127.0.0.1${SCRIPT_NAME:-/}
HEALTHCHECK CMD ["wget","--output-document=-","--quiet","--tries=1","http://127.0.0.1/"]
ENTRYPOINT ["entrypoint.sh"]
CMD ["gunicorn","powerdnsadmin:create_app()"]

View File

@ -2,7 +2,7 @@
set -euo pipefail
cd /app
GUNICORN_TIMEOUT="${GUNICORN_TIMEOUT:-120}"
GUNICORN_TIMEOUT="${GUINCORN_TIMEOUT:-120}"
GUNICORN_WORKERS="${GUNICORN_WORKERS:-4}"
GUNICORN_LOGLEVEL="${GUNICORN_LOGLEVEL:-info}"
BIND_ADDRESS="${BIND_ADDRESS:-0.0.0.0:80}"

View File

@ -1,136 +1,105 @@
### API Usage
#### Getting started with docker
1. Run `docker-compose up`, then open the UI at http://localhost:9191; the Swagger API specification is available at http://localhost:9191/swagger
2. Click to register a user, e.g. user: admin and password: admin
3. Log in to the UI and, in Settings, enable domain creation for users; now both the admin account and ordinary users can create and manage domains
4. Click on the API Keys menu, then click the "Add Key" button to add a new Administrator Key
5. Keep the base64-encoded API key somewhere safe, as it won't be available in clear text anymore
4. Encode your user and password to base64, in our example we have user admin and password admin so in linux cmd line we type:
#### Accessing the API
PDA has its own API, which should not be confused with the PowerDNS API. Keep in mind that you have to enable the PowerDNS API with a key that will be used by PDA to manage it. Therefore, you should use PDA-created keys to browse PDA's API, on PDA's address and port. They don't grant access to the PowerDNS API.
The PDA API consists of two distinct parts:
- The /powerdnsadmin endpoints manage PDA content (accounts, users, apikeys) and also allow domain creation/deletion
- The /server endpoints proxy queries to the backend PowerDNS instance's API. PDA acts as a proxy managing several API Keys and permissions to the PowerDNS content.
Requests to the API need two headers:
- The classic 'Content-Type: application/json' is required for all POST and PUT requests, though it's harmless to use it on each call
- The authentication header, providing either login:password basic authentication or API Key authentication.
When you access the `/powerdnsadmin` endpoint, you must use Basic Auth:
```bash
# Encode your user and password to base64
```
$ echo -n 'admin:admin'|base64
YWRtaW46YWRtaW4=
# Use the output as your basic auth header
curl -H 'Authorization: Basic YWRtaW46YWRtaW4=' -X <method> <url>
```
When you access the `/server` endpoint, you must use the API Key:
we use generated output in basic authentication, we authenticate as user,
with basic authentication, we can create/delete/get zone and create/delete/get/update apikeys
creating domain:
```bash
# Use the already base64 encoded key in your header
curl -H 'X-API-Key: YUdDdGhQM0tMQWV5alpJ' -X <method> <url>
```
Finally, the `/sync_domains` endpoint accepts both basic and API key authentication.
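As a minimal illustration of building the two authentication header values described above (the credentials and key are the placeholder examples from this page, not real secrets):

```shell
# Build a Basic auth header value from user:password, and reuse an
# already base64-encoded API key for the X-API-Key header.
BASIC_AUTH="Basic $(printf '%s' 'admin:admin' | base64)"
API_KEY_HEADER='X-API-Key: YUdDdGhQM0tMQWV5alpJ'
echo "$BASIC_AUTH"        # Basic YWRtaW46YWRtaW4=
echo "$API_KEY_HEADER"
```

Either value can then be passed to curl with `-H`, as shown in the examples on this page.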
#### Examples
Creating domain via `/powerdnsadmin`:
```bash
curl -L -vvv -H 'Content-Type: application/json' -H 'Authorization: Basic YWRtaW46YWRtaW4=' -X POST http://localhost:9191/api/v1/pdnsadmin/zones --data '{"name": "yourdomain.com.", "kind": "NATIVE", "nameservers": ["ns1.mydomain.com."]}'
```
Creating an apikey which has the Administrator role:
creating apikey which has Administrator role, apikey can have also User role, when creating such apikey you have to specify also domain for which apikey is valid:
```bash
# Create the key
```
curl -L -vvv -H 'Content-Type: application/json' -H 'Authorization: Basic YWRtaW46YWRtaW4=' -X POST http://localhost:9191/api/v1/pdnsadmin/apikeys --data '{"description": "masterkey","domains":[], "role": "Administrator"}'
```
Example response (don't forget to save the plain key from the output)
```json
[
{
"accounts": [],
"description": "masterkey",
"domains": [],
"role": {
"name": "Administrator",
"id": 1
},
"id": 2,
"plain_key": "aGCthP3KLAeyjZI"
}
]
call above will return response like this:
```
[{"description": "samekey", "domains": [], "role": {"name": "Administrator", "id": 1}, "id": 2, "plain_key": "aGCthP3KLAeyjZI"}]
```
We can use the apikey for all calls to PowerDNS (don't forget to specify Content-Type):
we take plain_key and base64 encode it, this is the only time we can get API key in plain text and save it somewhere:
Getting powerdns configuration (Administrator Key is needed):
```
$ echo -n 'aGCthP3KLAeyjZI'|base64
YUdDdGhQM0tMQWV5alpJ
```
```bash
We can use apikey for all calls specified in our API specification (it tries to follow powerdns API 1:1, only tsigkeys endpoints are not yet implemented), don't forget to specify Content-Type!
getting powerdns configuration:
```
curl -L -vvv -H 'Content-Type: application/json' -H 'X-API-KEY: YUdDdGhQM0tMQWV5alpJ' -X GET http://localhost:9191/api/v1/servers/localhost/config
```
Creating and updating records:
creating and updating records:
```bash
```
curl -X PATCH -H 'Content-Type: application/json' --data '{"rrsets": [{"name": "test1.yourdomain.com.","type": "A","ttl": 86400,"changetype": "REPLACE","records": [ {"content": "192.0.2.5", "disabled": false} ]},{"name": "test2.yourdomain.com.","type": "AAAA","ttl": 86400,"changetype": "REPLACE","records": [ {"content": "2001:db8::6", "disabled": false} ]}]}' -H 'X-API-Key: YUdDdGhQM0tMQWV5alpJ' http://127.0.0.1:9191/api/v1/servers/localhost/zones/yourdomain.com.
```
Getting a domain:
getting domain:
```bash
```
curl -L -vvv -H 'Content-Type: application/json' -H 'X-API-KEY: YUdDdGhQM0tMQWV5alpJ' -X GET http://localhost:9191/api/v1/servers/localhost/zones/yourdomain.com
```
List a zone's records:
list zone records:
```bash
```
curl -H 'Content-Type: application/json' -H 'X-API-Key: YUdDdGhQM0tMQWV5alpJ' http://localhost:9191/api/v1/servers/localhost/zones/yourdomain.com
```
Add a new record:
add new record:
```bash
```
curl -H 'Content-Type: application/json' -X PATCH --data '{"rrsets": [ {"name": "test.yourdomain.com.", "type": "A", "ttl": 86400, "changetype": "REPLACE", "records": [ {"content": "192.0.5.4", "disabled": false } ] } ] }' -H 'X-API-Key: YUdDdGhQM0tMQWV5alpJ' http://localhost:9191/api/v1/servers/localhost/zones/yourdomain.com | jq .
```
Update a record:
update record:
```bash
```
curl -H 'Content-Type: application/json' -X PATCH --data '{"rrsets": [ {"name": "test.yourdomain.com.", "type": "A", "ttl": 86400, "changetype": "REPLACE", "records": [ {"content": "192.0.2.5", "disabled": false, "name": "test.yourdomain.com.", "ttl": 86400, "type": "A"}]}]}' -H 'X-API-Key: YUdDdGhQM0tMQWV5alpJ' http://localhost:9191/api/v1/servers/localhost/zones/yourdomain.com | jq .
```
Delete a record:
delete record:
```bash
```
curl -H 'Content-Type: application/json' -X PATCH --data '{"rrsets": [ {"name": "test.yourdomain.com.", "type": "A", "ttl": 86400, "changetype": "DELETE"}]}' -H 'X-API-Key: YUdDdGhQM0tMQWV5alpJ' http://localhost:9191/api/v1/servers/localhost/zones/yourdomain.com | jq
```
### Generate ER diagram
With Docker:
```bash
# Install build packages
```
apt-get install python-dev graphviz libgraphviz-dev pkg-config
# Get the required python libraries
```
```
pip install graphviz mysqlclient ERAlchemy
# Start the docker container
```
```
docker-compose up -d
# Set environment variables
```
```
source .env
# Generate the diagrams
```
```
eralchemy -i "mysql://${PDA_DB_USER}:${PDA_DB_PASSWORD}@$(docker inspect powerdns-admin-mysql | jq -jr '.[0].NetworkSettings.Networks.powerdnsadmin_default.IPAddress'):3306/powerdns_admin" -o /tmp/output.pdf
```

View File

@ -1,74 +0,0 @@
# Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at [admin@powerdnsadmin.org](mailto:admin@powerdnsadmin.org). All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

View File

@ -1,107 +0,0 @@
# Contribution Guide
**Looking for help?** Try taking a look at the project's
[Support Guide](https://github.com/PowerDNS-Admin/PowerDNS-Admin/blob/master/.github/SUPPORT.md) or joining
our [Discord Server](https://discord.powerdnsadmin.org).
<div align="center">
<h3>
:bug: <a href="#bug-reporting-bugs">Report a bug</a> &middot;
:bulb: <a href="#bulb-feature-requests">Suggest a feature</a> &middot;
:arrow_heading_up: <a href="#arrow_heading_up-submitting-pull-requests">Submit a pull request</a>
</h3>
<h3>
:rescue_worker_helmet: <a href="#rescue_worker_helmet-become-a-maintainer">Become a maintainer</a> &middot;
:heart: <a href="#heart-other-ways-to-contribute">Other ideas</a>
</h3>
</div>
<h3></h3>
Some general tips for engaging here on GitHub:
* Register for a free [GitHub account](https://github.com/signup) if you haven't already.
* You can use [GitHub Markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) for formatting text and adding images.
* To help mitigate notification spam, please avoid "bumping" issues with no activity. (To vote an issue up or down, use a :thumbsup: or :thumbsdown: reaction.)
* Please avoid pinging members with `@` unless they've previously expressed interest or involvement with that particular issue.
## [Project Update - PLEASE READ!!!](https://github.com/PowerDNS-Admin/PowerDNS-Admin/discussions/1708)
## :bug: Reporting Bugs
* First, ensure that you're running the [latest stable version](https://github.com/PowerDNS-Admin/PowerDNS-Admin/releases) of PDA. If you're running an older version, there's a chance that the bug has already been fixed.
* Next, search our [issues list](https://github.com/PowerDNS-Admin/PowerDNS-Admin/issues?q=is%3Aissue) to see if the bug you've found has already been reported. If you come across a bug report that seems to match, please click "add a reaction" in the top right corner of the issue and add a thumbs up (:thumbsup:). This will help draw more attention to it. Any comments you can add to provide additional information or context would also be much appreciated.
* If you can't find any existing issues (open or closed) that seem to match yours, you're welcome to [submit a new bug report](https://github.com/PowerDNS-Admin/PowerDNS-Admin/issues/new/choose). Be sure to complete the entire report template, including detailed steps that someone triaging your issue can follow to confirm the reported behavior. (If we're not able to replicate the bug based on the information provided, we'll ask for additional detail.)
* Some other tips to keep in mind:
* Error messages and screenshots are especially helpful.
* Don't prepend your issue title with a label like `[Bug]`; the proper label will be assigned automatically.
* Verify that you have GitHub notifications enabled and are subscribed to your issue after submitting.
* We appreciate your patience as bugs are prioritized by their severity, impact, and difficulty to resolve.
## :bulb: Feature Requests
* First, check the GitHub [issues list](https://github.com/PowerDNS-Admin/PowerDNS-Admin/issues?q=is%3Aissue) to see if the feature you have in mind has already been proposed. If you happen to find an open feature request that matches your idea, click "add a reaction" in the top right corner of the issue and add a thumbs up (:thumbsup:). This ensures that the issue has a better chance of receiving attention. Also feel free to add a comment with any additional justification for the feature.
* If you have a rough idea that's not quite ready for formal submission yet, start a [GitHub discussion](https://github.com/PowerDNS-Admin/PowerDNS-Admin/discussions) instead. This is a great way to test the viability and narrow down the scope of a new feature prior to submitting a formal proposal, and can serve to generate interest in your idea from other community members.
* Once you're ready, submit a feature request [using this template](https://github.com/PowerDNS-Admin/PowerDNS-Admin/issues/choose). Be sure to provide sufficient context and detail to convey exactly what you're proposing and why. The stronger your use case, the better chance your proposal has of being accepted.
* Some other tips to keep in mind:
* Don't prepend your issue title with a label like `[Feature]`; the proper label will be assigned automatically.
* Try to anticipate any likely questions about your proposal and provide that information proactively.
* Verify that you have GitHub notifications enabled and are subscribed to your issue after submitting.
* You're welcome to volunteer to implement your FR, but don't submit a pull request until it has been approved.
## :arrow_heading_up: Submitting Pull Requests
* [Pull requests](https://docs.github.com/en/pull-requests) (a feature of GitHub) are used to propose changes to PDA's code base. Our process generally goes like this:
* A user opens a new issue (bug report or feature request)
* A maintainer triages the issue and may mark it as needing an owner
* The issue's author can volunteer to own it, or someone else can
* A maintainer assigns the issue to whoever volunteers
* The issue owner submits a pull request that will resolve the issue
* A maintainer reviews and merges the pull request, closing the issue
* It's very important that you not submit a pull request until a relevant issue has been opened **and** assigned to you. Otherwise, you risk wasting time on work that may ultimately not be needed.
* New pull requests should generally be based off of the `dev` branch, rather than `master`. The `dev` branch is used for ongoing development, while `master` is used for tracking stable releases.
* In most cases, it is not necessary to add a changelog entry: A maintainer will take care of this when the PR is merged. (This helps avoid merge conflicts resulting from multiple PRs being submitted simultaneously.)
* All code submissions should meet the following criteria (CI will eventually enforce these checks):
* Python syntax is valid
* PEP 8 compliance is enforced, with the exception that lines may be
greater than 80 characters in length
* Some other tips to keep in mind:
* If you'd like to volunteer for someone else's issue, please post a comment on that issue letting us know. (This will allow the maintainers to assign it to you.)
* All new functionality must include relevant tests where applicable.
## :rescue_worker_helmet: Become a Maintainer
We're always looking for motivated individuals to join the maintainers team and help drive PDA's long-term development. Some of our most sought-after skills include:
* Python development with a strong focus on the [Flask](https://flask.palletsprojects.com/) and [Django](https://www.djangoproject.com/) frameworks
* Expertise working with SQLite, MySQL, and/or PostgreSQL databases
* Javascript & TypeScript proficiency
* A knack for web application design (HTML & CSS)
* Familiarity with git and software development best practices
* Excellent attention to detail
* Working experience in the field of network operations as it relates to the use of DNS (Domain Name System) servers.
We generally ask that maintainers dedicate around four hours of work to the project each week on average, which includes both hands-on development and project management tasks such as issue triage.
We do maintain an active Mattermost instance for internal communication, but we also use GitHub issues for project management.
Some maintainers petition their employer to grant some of their paid time to work on PDA.
Interested? You can contact our lead maintainer, Matt Scott, at admin@powerdnsadmin.org. We'd love to have you on the team!
## :heart: Other Ways to Contribute
You don't have to be a developer to contribute to PDA: There are plenty of other ways you can add value to the community! Below are just a few examples:
* Help answer questions and provide feedback in our [GitHub discussions](https://github.com/PowerDNS-Admin/PowerDNS-Admin/discussions).
* Write a blog article or record a YouTube video demonstrating how PDA is used at your organization.

View File

@ -1,100 +0,0 @@
# PDA Project Update
## Introduction
Hello PDA community members,
My name is Matt Scott, and I am the owner of [Azorian Solutions](https://azorian.solutions), a consultancy for the
Internet Service Provider (ISP) industry. I'm pleased to announce that I have taken ownership of the PDA project and
will be taking over the lead maintainer role, effective immediately.
Please always remember and thank both [Khanh Ngo](https://github.com/ngoduykhanh) and
[Jérôme Becot](https://github.com/jbe-dw) for their efforts in keeping this project alive thus far. Without the effort
of Khanh creating the PDA project and community, and the efforts of Jérôme for holding up the lead maintainer role after
Khanh had to step down, this project would not still be alive today.
With that being said, please read through all the following announcements as they are important if you're an active PDA
user or community member. I intend to make many great enhancements to the project, but it could be a bumpy road ahead.
### Project Maintenance
As it stands today, contributions to the project are at a low. At this point, there is a rather large backlog of issues
and feature requests in contrast to the current maintenance capacities. This is not to say you should lose hope though!
As part of this project transition, some additional contribution interest has been generated, and I expect to attract
more with the changes I'm planning to make. In the near future, I may bypass some usual maintenance processes in order
to expedite some changes to the project that have been outstanding for some time.
This is to say, however, that unless the project attracts a healthy new contribution base, issues may continue to pile up,
as maintenance capacity is rather limited. This is further complicated by the fact that the current code base is harder
to follow, since it largely lacks uniformity and standards. This lack of uniformity has led to a difficult
situation that makes implementing certain changes less effective. This status quo is not uncommon for projects that
started out the way PDA did, so it's unfortunate but not unexpected.
### Change of Direction
In order to reorganize the project and put it on a track to a future that allows it to contend with other
commercial-quality products, I had to weigh the merits of two distinct paths forward to achieve this
goal. One path is seemingly obvious: continue maintaining the current code base while overhauling it to shift it
towards the envisioned goal. The other is a fresh solution design with a complete rebuild.
The answer to the aforementioned decision might seem obvious to those of you who typically favor the "don't reinvent the
wheel" mentality. I'm unclear on the details surrounding the original use case that drove the development of this
project, but I don't believe it was on par with some use cases we see today, which include operators handling many tens
of thousands of zones and/or records. There are many changes that have been (sometimes) haphazardly implemented, which
has led to the previously mentioned lack of uniformity, among other issues. To put it simply, I'm not sure the
project ever had a grand vision per se; instead, it was mostly reactionary to community requests.
I believe that the current project has served the community fairly well from what I can tell. I know the product has
certainly helped me in my professional efforts across many environments. I also believe that it's time to pivot so that
the project can realize its true potential, considering the existing user base. For this reason, I am beginning the
planning phase of a project overhaul. This effort will involve a complete re-engineering of the project's contribution
standards and requirements, technology stack, and project structure.
This was not an easy decision to come to, but one must appreciate that not many people can get very excited about
working on the current project code base. The current project has many barriers to entry, which I intend to drastically
lower with future changes. The reality is that it's easier to attract contributors to a new build effort, as it offers
an opportunity to own a part of the project with impactful contributions.
### Project Enhancements
Since this is the beginning of a rebirth of the project, so to speak, I want to implement a new operational tactic that
will hopefully drive contributions through incentive. Many of us understand that any project needs a leader to stay on
track and organized. If everything were a democratic process, it would take too long and suffer unnecessary challenges.
With that being said, I do believe there is plenty of opportunity throughout the various development phases of the
project for a democratic process in which community contributors and members can participate in the decision-making.
The plan to achieve that democratic goal is to centralize communications and define some basic structured processes. To
that end, more effective methods of communication have been implemented so that those interested in contributing can
easily participate in fluid, open communication. This has already proven quite effective for exchanging ideas and
visions while addressing the challenge of contributors living in vastly different time zones. It takes the form of a
private chat hosted by the PDA project using Mattermost (a Slack-like alternative).
Even if you aren't in a position to contribute work directly to the project, you can still contribute by participating
in these very important early discussions that will shape the solution engineering. If the PDA project is an important
tool in your organization, I encourage you to join the conversation and share your use-cases where applicable. Having
more insight into community use-cases will only benefit the future of this project.
If you're interested in joining the conversation, please email me at
[admin@powerdnsadmin.org](mailto:admin@powerdnsadmin.org) for an invitation.
### Re-branding
As part of this project transition, I will also be changing the naming scheme in order to support the future development
efforts toward a newly engineered solution. The current PDA project will ultimately become known as the "PDA Legacy"
application. This change will help facilitate the long-term solution to take the branding position of the existing
solution. Another effort I will be making is to get an app landing page online at the project's new domain:
[powerdnsadmin.org](https://powerdnsadmin.org). This will act as one more point of online exposure for the project which
will hopefully lend itself well to attracting additional community members.
### Contribution Requirements
Another big change with the new project will be well-defined contribution requirements. I realize these requirements can
be demotivating for some, but they are a necessary evil to ensure the project actually achieves its goals effectively.
It's important to always remember that strict requirements are to everyone's benefit: they push for order where chaos is
quite destructive.
### Closing
I hope these announcements garner more participation in the PDA community. The project definitely needs more help to
achieve any goal at this point, so your participation is valued!

@ -1,109 +0,0 @@
# PDA Project Update
## Introduction
Hello PDA community members,
I know it has been quite a while since the last formal announcement like this. Things have been quite busy and difficult
for me both professionally and personally. While I try hard never to make my problems someone else's problems, I do
believe it's important to be transparent with the community. I'm not going to go into details, but I will say that I
have been dealing with some mental health issues that have been quite challenging. I'm not one to give up though,
so I'm pushing through and trying to get back on track.
With that being said, let's jump into the announcements.
### Project Maintenance
Although I haven't been nearly as active on the project as I would like to be, I have been keeping an eye on things and
trying to keep up with the maintenance. I know there are a lot of issues and feature requests that have been piling up,
and I'm sorry for that. Even if I had been more active in recent months, it would not have changed the true root cause
of the problem.
This project was started out of an individual's own use-case. I don't believe it was ever intended to be a commercial-
quality product or a community project. It did, however, gain traction quickly, and the community grew. This is a great
thing, but it also comes with some challenges. The biggest challenge is that the project was never designed to be a
community project, which means it lacks many of the things required to manage a community project effectively. This is
not to say that the project is doomed, but many of the fast-paced changes combined with the lack of standards have led
to a difficult situation that makes implementing certain changes incredibly unproductive and, quite often, entirely
counter-productive.
After many years of accepting contributions from people who are not professional developers, the project has become
quite difficult to maintain. This is not to say that I don't appreciate the contributions, but it's important to
understand that the state of the project's code base is not good. This is not uncommon for projects born the way PDA was
born, so it's unfortunate but not unexpected.
As of today, there are so many dependencies and so many poorly implemented features that it's difficult to make any
change without breaking many other pieces. This is further complicated by the fact that the current code base is hard to
follow, since it largely lacks uniformity and standards. That lack of uniformity has led to a situation where automated
regression testing is not possible. Such testing is a very important aspect of any project that expects to make changes
without breaking things, and of any project that expects to accept contributions from the community with minimal
management resources.
The hard reality is that the majority of stakeholders in the project are not professional developers, which naturally
means the number of people who can offer quality contributions is very limited. This problem is further aggravated by
poor-quality feature implementations that are very hard to follow, even for seasoned developers like myself. Many
seemingly small reported issues have turned out to have resolutions that are not as simple as they seem.
### New Direction
As I previously stated in my last formal announcement, we would be working toward a total replacement of the project.
Unfortunately, this is not a simple task, and it's not something that can be done quickly. Furthermore, with
increasingly limited capacity in our own lives to work on this, we are essentially drowning in a sea of technical debt
created by the project's past decision to accept all contributions. We have essentially reached a point where far too
much time and far too many resources are wasted just trying to meet the current demand of requests against the current
edition of PDA. This is a tragedy, because the effort invested in the current edition isn't creating true progress for
the project; it's merely delaying the inevitable.
As I have stated before to many community members, one aspect of taking over management of this project, in order to
ultimately save it and keep it alive, would involve making hard decisions that many will not agree with. It's
unfortunate that many of those who are less than supportive of these decisions often lack the experience needed to
understand their importance. I'm not saying that I'm always right, but it's not hard to see where this is headed without
some drastic changes.
With all of that being said, it's time for me to make some hard decisions. I have decided that the best course of
action is to stop accepting contributions to the current edition of PDA. At this point, due to the aforementioned
issues that lead to breaking the application with seemingly simple changes, it's just not worth the effort to try to
keep up with the current edition. This is not to say that I'm giving up on the project, but instead I'm going to
re-focus my efforts on the new edition of PDA. This is the only way to ensure that the project will survive and
hopefully thrive in the future.
I will not abandon the current set of updates that were planned for the next release of `0.4.2` however. I have
re-scheduled that release to be out by the end of the year. This will be the last release of the current edition of
PDA. The consensus from some users is that the current edition is stable enough to be used in production environments.
I don't necessarily agree with that, but I do believe that it's stable enough to be used in production
environments with the understanding that it's not a commercial quality product.
### Future Contributions
For those of you wondering about contributions to the new edition of PDA, the answer for now is simple. I won't be
accepting any contributions to the new edition until I can achieve a stable release that delivers the core features of
the current edition. This is not to say that I won't be accepting any contributions at all, but instead that I will be
very selective about what contributions I accept. I believe this is the only way to ensure that a solid foundation not
only takes shape, but remains solid.
It is well understood that many developers have their own ways of doing things, but it's important to understand
that this project is not a personal project. This project is a community project and therefore must be treated as such.
This means that the project must be engineered in a way that allows for the community to participate in the development
process. This is not possible if the project is not engineered in a way that is easy to follow and understand.
### Project Enhancements
It should be understood that one of the greatest benefits of this pivot is that it will allow for a more structured
development process. As a result of that, the project could potentially see a future where it adopts a whole new set of
features that weren't previously imagined. One prime example of this could be integration with registrar APIs. This
could make easy work of tasks such as DNSSEC key rotation, which is currently a very manual process.
I am still working on final project requirements for additional phases of the new PDA edition, but these additions
won't receive any attention until the core features are implemented. I will be sure to make announcements as these
requirements are finalized. It is my intention to follow a request for proposal (RFP) process for these additional
features. This will allow the community to participate in the decision-making process for future expansion of the
project.
### Closing
I hope that by the time you have reached this point in the announcement, I have elicited new hope for the long-term
future of the project. I know that many of you have been waiting a long time for issues to be resolved, for requested
features to be implemented, and for the project to become more stable. It's unfortunate that it has taken this long to
get to this point, but this is the nature of life itself. I hope that you can understand that this is the only
reasonable gamble that the project survives and thrives in the future.

@ -18,83 +18,3 @@ Now you can enable the OAuth in PowerDNS-Admin.
* Restart PowerDNS-Admin
This should allow you to log in using OAuth.
#### Keycloak
To link to Keycloak for authentication, you need to create a new client in the Keycloak Administration Console.
* Log in to the Keycloak Administration Console
* Go to Clients > Create
* Enter a Client ID (for example 'powerdns-admin') and click 'Save'
* Scroll down to 'Access Type' and choose 'Confidential'.
* Scroll down to 'Valid Redirect URIs' and enter 'https://<pdnsa address>/oidc/authorized'
* Click 'Save'
* Go to the 'Credentials' tab and copy the Client Secret
* Log in to PowerDNS-Admin and go to 'Settings > Authentication > OpenID Connect OAuth'
* Enter the following details:
* Client key -> Client ID
* Client secret -> Client secret copied from Keycloak
* Scope: `profile`
* API URL: https://<keycloak url>/auth/realms/<realm>/protocol/openid-connect/
* Token URL: https://<keycloak url>/auth/realms/<realm>/protocol/openid-connect/token
* Authorize URL: https://<keycloak url>/auth/realms/<realm>/protocol/openid-connect/auth
* Logout URL: https://<keycloak url>/auth/realms/<realm>/protocol/openid-connect/logout
* Leave the rest default
* Save the changes and restart PowerDNS-Admin
* Use the new 'Sign in using OpenID Connect' button to log in.
#### OpenID Connect OAuth
To link to an OIDC service for authentication, register your PowerDNS-Admin instance with the OIDC provider. This requires your PowerDNS-Admin web interface to use an HTTPS URL.
Enable OpenID Connect OAuth option.
* Client key, The client ID
* Scope, The scope of the data.
* API URL, <oidc_provider_link>/auth (The ending can be different with each provider)
* Token URL, <oidc_provider_link>/token
* Authorize URL, <oidc_provider_link>/auth
* Metadata URL, <oidc_provider_link>/.well-known/openid-configuration
* Logout URL, <oidc_provider_link>/logout
* Username, This will be the claim that will be used as the username. (Usually preferred_username)
* First Name, This will be the firstname of the user. (Usually given_name)
* Last Name, This will be the lastname of the user. (Usually family_name)
* Email, This will be the email of the user. (Usually email)
#### To create accounts on OIDC login, use the following properties:
* Autoprovision Account Name Property, This property will set the name of the created account.
This property can be a string or a list.
* Autoprovision Account Description Property, This property will set the description of the created account.
This property can be a string or a list.
Suppose the IdP sends us variables named "groups" and "groups_description", where "groups" contains the groups the user
is a part of. We put the variable name "groups" in the "Name Property" and "groups_description" in the "Description
Property". This will result in the following account being created:
Input we get from the IdP:
```
{
  "preferred_username": "example_username",
  "given_name": "example_firstname",
  "family_name": "example_lastname",
  "email": "example_email",
  "groups": ["github", "gitlab"],
  "groups_description": ["github.com", "gitlab.com"]
}
```
The user properties will be:
```
Username: example_username
First Name: example_firstname
Last Name: example_lastname
Email: example_email
Role: User
```
The groups properties will be:
```
Name: github Description: github.com Members: example_username
Name: gitlab Description: gitlab.com Members: example_username
```
If the option "delete_sso_accounts" is turned on, the user will only be a part of the groups the IdP provided and will be removed from all other accounts.

@ -1,50 +0,0 @@
# PowerDNS-Admin wiki
## Database Setup guides
- [MySQL / MariaDB](database-setup/Setup-MySQL-or-MariaDB.md)
- [PostgreSQL](database-setup/Setup-PostgreSQL.md)
## Installation guides
- [General (Read this first)](install/General.md)
- BSD:
- [Install on FreeBSD 12.1-RELEASE](install/Running-on-FreeBSD.md)
- Containers:
- [Install on Docker](install/Running-PowerDNS-Admin-on-Docker.md)
- Debian:
- [Install on Ubuntu or Debian](install/Running-PowerDNS-Admin-on-Ubuntu-or-Debian.md)
- Red-Hat:
- [Install on Centos 7](install/Running-PowerDNS-Admin-on-Centos-7.md)
- [Install on Fedora 23](install/Running-PowerDNS-Admin-on-Fedora-23.md)
- [Install on Fedora 30](install/Running-PowerDNS-Admin-on-Fedora-30.md)
### Post install Setup
- [Environment Variables](configuration/Environment-variables.md)
- [Getting started](configuration/Getting-started.md)
- SystemD:
- [Running PowerDNS-Admin as a service using Systemd](install/Running-PowerDNS-Admin-as-a-service-(Systemd).md)
### Web Server configuration
- [Supervisord](web-server/Supervisord-example.md)
- [Systemd](web-server/Systemd-example.md)
- [Systemd + Gunicorn + Nginx](web-server/Running-PowerDNS-Admin-with-Systemd-Gunicorn-and-Nginx.md)
- [Systemd + Gunicorn + Apache](web-server/Running-PowerDNS-Admin-with-Systemd,-Gunicorn-and-Apache.md)
- [uWSGI](web-server/uWSGI-example.md)
- [WSGI-Apache](web-server/WSGI-Apache-example.md)
- [Docker-ApacheReverseProxy](web-server/Running-Docker-Apache-Reverseproxy.md)
## Using PowerDNS-Admin
- Setting up a zone
- Adding a record
## Feature usage
- [DynDNS2](features/DynDNS2.md)
## Debugging
- [Debugging the build process](debug/build-process.md)

@ -1,34 +0,0 @@
Active Directory Setup - Tested with Windows Server 2012
1) Login as an admin to PowerDNS Admin
2) Go to Settings --> Authentication
3) Under Authentication, select LDAP
4) Click the Radio Button for Active Directory
5) Fill in the required info -
* LDAP URI - ldap://ip.of.your.domain.controller:389
* LDAP Base DN - dc=yourdomain,dc=com
* Active Directory domain - yourdomain.com
* Basic filter - (objectCategory=person)
  * the parentheses here are **very important**
* Username field - sAMAccountName
* GROUP SECURITY - Status - On
* Admin group - CN=Your_AD_Admin_Group,OU=Your_AD_OU,DC=yourdomain,DC=com
* Operator group - CN=Your_AD_Operator_Group,OU=Your_AD_OU,DC=yourdomain,DC=com
* User group - CN=Your_AD_User_Group,OU=Your_AD_OU,DC=yourdomain,DC=com
6) Click Save
7) Logout and re-login as an LDAP user from each of the above groups.
If you're having problems getting the correct information for your groups, the following tool can be useful -
https://docs.microsoft.com/en-us/sysinternals/downloads/adexplorer
In our testing, groups with spaces in the name did not work, we had to create groups with underscores to get everything operational.
YMMV

@ -1,65 +0,0 @@
# Supported environment variables
| Variable | Description | Required | Default value |
|--------------------------------|--------------------------------------------------------------------------|------------|---------------|
| BIND_ADDRESS | | | |
| CSRF_COOKIE_SECURE | | | |
| SESSION_TYPE | null \| filesystem \| sqlalchemy | | filesystem |
| LDAP_ENABLED | | | |
| LOCAL_DB_ENABLED | | | |
| LOG_LEVEL | | | |
| MAIL_DEBUG | | | |
| MAIL_DEFAULT_SENDER | | | |
| MAIL_PASSWORD | | | |
| MAIL_PORT | | | |
| MAIL_SERVER | | | |
| MAIL_USERNAME | | | |
| MAIL_USE_SSL | | | |
| MAIL_USE_TLS | | | |
| OFFLINE_MODE | | | |
| OIDC_OAUTH_API_URL | | | |
| OIDC_OAUTH_AUTHORIZE_URL | | | |
| OIDC_OAUTH_TOKEN_URL | | | |
| OIDC_OAUTH_METADATA_URL | | | |
| PORT | | | |
| SERVER_EXTERNAL_SSL | Forceful override of URL schema detection when using the url_for method. | False | None |
| REMOTE_USER_COOKIES | | | |
| REMOTE_USER_LOGOUT_URL | | | |
| SALT | | | |
| SAML_ASSERTION_ENCRYPTED | | | |
| SAML_ATTRIBUTE_ACCOUNT | | | |
| SAML_ATTRIBUTE_ADMIN | | | |
| SAML_ATTRIBUTE_EMAIL | | | |
| SAML_ATTRIBUTE_GIVENNAME | | | |
| SAML_ATTRIBUTE_GROUP | | | |
| SAML_ATTRIBUTE_NAME | | | |
| SAML_ATTRIBUTE_SURNAME | | | |
| SAML_ATTRIBUTE_USERNAME | | | |
| SAML_CERT | | | |
| SAML_DEBUG | | | |
| SAML_ENABLED | | | |
| SAML_GROUP_ADMIN_NAME | | | |
| SAML_GROUP_TO_ACCOUNT_MAPPING | | | |
| SAML_IDP_SSO_BINDING | | | |
| SAML_IDP_ENTITY_ID | | | |
| SAML_KEY | | | |
| SAML_LOGOUT | | | |
| SAML_LOGOUT_URL | | | |
| SAML_METADATA_CACHE_LIFETIME | | | |
| SAML_METADATA_URL | | | |
| SAML_NAMEID_FORMAT | | | |
| SAML_PATH | | | |
| SAML_SIGN_REQUEST | | | |
| SAML_SP_CONTACT_MAIL | | | |
| SAML_SP_CONTACT_NAME | | | |
| SAML_SP_ENTITY_ID | | | |
| SAML_WANT_MESSAGE_SIGNED | | | |
| SECRET_KEY | Flask secret key [^1] | Y | no default |
| SESSION_COOKIE_SECURE | | | |
| SIGNUP_ENABLED | | | |
| SQLALCHEMY_DATABASE_URI | SQL Alchemy URI to connect to database | N | no default |
| SQLALCHEMY_TRACK_MODIFICATIONS | | | |
| SQLALCHEMY_ENGINE_OPTIONS | JSON string, e.g. '{"pool_recycle":600,"echo":1}' [^2] | | |
[^1]: Flask secret key (see https://flask.palletsprojects.com/en/1.1.x/config/#SECRET_KEY for how to generate)
[^2]: See Flask-SQLAlchemy Documentation for all engine options.
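As footnote [^1] suggests, `SECRET_KEY` should be a strong random value. One minimal way to generate one with Python's standard library (any cryptographically random string works):

```python
# Generate a random value suitable for Flask's SECRET_KEY.
import secrets

secret_key = secrets.token_hex(32)  # 64 hex characters
print(secret_key)
```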

@ -1,16 +0,0 @@
# Getting started with PowerDNS-Admin
In your FLASK_CONF file (check the installation directions for where yours is located), make sure the database URI is filled in (in some previous documentation this file was called config.py):
For MySQL / MariaDB:
```
SQLALCHEMY_DATABASE_URI = 'mysql://username:password@127.0.0.1/db_name'
```
For Postgres:
```
SQLALCHEMY_DATABASE_URI = 'postgresql://powerdnsadmin:powerdnsadmin@127.0.0.1/powerdnsadmindb'
```
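SQLite is also listed as a supported database; the URI follows the same pattern (the file path below is an assumption, adjust it to where you want the database file stored):

```
SQLALCHEMY_DATABASE_URI = 'sqlite:////opt/web/powerdns-admin/pdns.db'
```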
Open your web browser and go to `http://localhost:9191` to visit PowerDNS-Admin web interface. Register a user. The first user will be in the Administrator role.

@ -1,17 +0,0 @@
### PowerDNSAdmin basic settings
PowerDNS-Admin has many features and settings that can be turned on or off. This document explains those settings.
To find them in the dashboard, go to Settings > Basic.
- `allow_user_create_domain`: Allows users with the `user` role to create a domain; not possible by default.
- `allow_user_remove_domain`: Same as `allow_user_create_domain`, but for removing a domain.
- `allow_user_view_history`: Allows a user with the `user` role to view and access the history.
- `custom_history_header`: A string setting naming a request header. If that header exists in the request, its value is
recorded in the `created_by` column of the history; if empty or unset, the API key description is used instead.
- `site_name`: The name of the site.

@ -1,4 +0,0 @@
# Database setup guides
- [MySQL / MariaDB](Setup-MySQL-or-MariaDB.md)
- [PostgreSQL](Setup-PostgreSQL.md)

@ -1,56 +0,0 @@
# Setup MySQL database for PowerDNS-Admin
This guide will show you how to prepare a MySQL or MariaDB database for PowerDNS-Admin.
We assume the database is installed per your platform's directions (apt, yum, etc). Directions to do this can be found below:
- MariaDB:
- https://mariadb.com/kb/en/getting-installing-and-upgrading-mariadb/
- https://www.digitalocean.com/community/tutorials/how-to-install-mariadb-on-ubuntu-20-04
- MySQL:
- https://dev.mysql.com/downloads/mysql/
- https://www.digitalocean.com/community/tutorials/how-to-install-mysql-on-ubuntu-20-04
The following directions assume a default configuration and, for production setups, that `mysql_secure_installation` has been run.
## Setup database:
Connect to the database (Usually using `mysql -u root -p` if a password has been set on the root database user or `sudo mysql` if not), then enter the following:
```
CREATE DATABASE `powerdnsadmin` CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
GRANT ALL PRIVILEGES ON `powerdnsadmin`.* TO 'pdnsadminuser'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD_HERE';
FLUSH PRIVILEGES;
```
- If your database server is located on a different machine, change 'localhost' to '%'
- Replace YOUR_PASSWORD_HERE with a secure password.
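Note: the `GRANT ... IDENTIFIED BY` form above works on MariaDB and MySQL 5.7, but MySQL 8.0 no longer creates users implicitly in `GRANT`. On MySQL 8.0+, the equivalent is:

```
CREATE USER 'pdnsadminuser'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD_HERE';
GRANT ALL PRIVILEGES ON `powerdnsadmin`.* TO 'pdnsadminuser'@'localhost';
FLUSH PRIVILEGES;
```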
Once there are no errors you can type `quit` in the mysql shell to exit from it.
## Install required packages:
### Red-hat based systems:
```
yum install MariaDB-shared mariadb-devel mysql-community-devel
```
### Debian based systems:
```
apt install libmysqlclient-dev
```
### Install python packages:
```
pip3 install mysqlclient==2.0.1
```
## Known issues:
Problem: If you plan to manage large zones, you may encounter some issues while applying changes. This is due to PowerDNS-Admin trying to insert the entire modified zone into the column history.detail.
Using MySQL/MariaDB, this column is created by default as TEXT and thus limited to 65,535 characters.
Solution: Convert the column to MEDIUMTEXT:
1. Connect to the database shell as described in the setup database section:
2. Execute the following commands:
```
USE powerdnsadmin;
ALTER TABLE history MODIFY detail MEDIUMTEXT;
```

@ -1,79 +0,0 @@
# Setup Postgres database for PowerDNS-Admin
This guide will show you how to prepare a PostgreSQL database for PowerDNS-Admin.
We assume the database is installed per your platform's directions (apt, yum, etc). Directions to do this can be found below:
- https://www.postgresql.org/download/
- https://www.digitalocean.com/community/tutorials/how-to-install-postgresql-on-ubuntu-22-04-quickstart
We assume a default configuration and only the postgres user existing.
## Setup database
The below will create a database called powerdnsadmindb and a user of powerdnsadmin.
```
$ sudo su - postgres
$ createuser powerdnsadmin
$ createdb -E UTF8 -l en_US.UTF-8 -O powerdnsadmin -T template0 powerdnsadmindb 'The database for PowerDNS-Admin'
$ psql
postgres=# ALTER ROLE powerdnsadmin WITH PASSWORD 'powerdnsadmin_password';
```
Note:
- Please change the information above (db, user, password) to fit your setup.
### Setup Remote access to database:
If your database is on a different server, note that Postgres does not allow remote connections by default.
To change this, follow the directions below:
```
[root@host ~]$ sudo su - postgres
# Edit /var/lib/pgsql/data/postgresql.conf
# Change the following line:
listen_addresses = 'localhost'
# to:
listen_addresses = '*'
# Edit /var/lib/pgsql/data/pg_hba.conf
# Add the following lines to the end of the file:
host all all 0.0.0.0/0 md5
host all all ::/0 md5
[postgres@host ~]$ exit
[root@host ~]$ sudo systemctl restart postgresql
```
On debian based systems these files are located in:
```
/etc/postgresql/<version>/main/
```
## Install required packages:
### Red-hat based systems:
TODO: confirm this is correct
```
sudo yum install postgresql-libs
```
### Debian based systems:
```
apt install python3-psycopg2
```
## Known Issues:
** To fill in **
## Docker (TODO: to move to docker docs)
TODO: Setup a local Docker postgres database ready to go (should probably move to the top).
```
docker run --name pdnsadmin-test -e BIND_ADDRESS=0.0.0.0
-e SECRET_KEY='a-very-secret-key'
-e PORT='9191'
-e SQLA_DB_USER='powerdns_admin_user'
-e SQLA_DB_PASSWORD='exceptionallysecure'
-e SQLA_DB_HOST='192.168.0.100'
-e SQLA_DB_NAME='powerdns_admin_test'
-v /data/node_modules:/var/www/powerdns-admin/node_modules -d -p 9191:9191 ixpict/powerdns-admin-pgsql:latest
```

@ -1,61 +0,0 @@
This describes how to debug the build process.
docker-compose.yml
```
version: "3"
services:
app:
image: powerdns/custom
container_name: powerdns
restart: always
build:
context: git
dockerfile: docker/Dockerfile
network_mode: "host"
logging:
driver: json-file
options:
max-size: 50m
environment:
- BIND_ADDRESS=127.0.0.1:8082
- SECRET_KEY='VerySecret'
- SQLALCHEMY_DATABASE_URI=mysql://pdnsadminuser:password@127.0.0.1/powerdnsadmin
- GUNICORN_TIMEOUT=60
- GUNICORN_WORKERS=2
- GUNICORN_LOGLEVEL=DEBUG
- OFFLINE_MODE=False
- CSRF_COOKIE_SECURE=False
```
Create a git folder in the location of the `docker-compose.yml` and clone the repo into it
```
mkdir git
cd git
git clone https://github.com/PowerDNS-Admin/PowerDNS-Admin.git .
```
In case you are behind an SSL Filter like me, you can add the following to each stage of the `git/docker/Dockerfile`
This installs the `update-ca-certificates` command from the Alpine repo and adds an SSL cert to the trust chain; make sure you are getting the right version in case the base image version changes.
```
RUN mkdir /tmp-pkg && cd /tmp-pkg && wget http://dl-cdn.alpinelinux.org/alpine/v3.17/main/x86_64/ca-certificates-20220614-r4.apk && apk add --allow-untrusted --no-network --no-cache /tmp-pkg/ca-certificates-20220614-r4.apk || true
RUN rm -rf /tmp/pkg
COPY MyCustomCerts.crt /usr/local/share/ca-certificates/MyCustomCerts.crt
RUN update-ca-certificates
COPY pip.conf /etc/pip.conf
```
`MyCustomCerts.crt` and `pip.conf` have to be placed inside the `git` folder.
The content of `pip.conf` is:
```
[global]
cert = /usr/local/share/ca-certificates/MyCustomCerts.crt
```
For easier debugging you can change the `CMD` of the `Dockerfile` to `CMD ["tail","-f", "/dev/null"]`, though I expect you to be fluent in Docker if you wish to debug this way.

@ -1,16 +0,0 @@
Usage:
IPv4: http://user:pass@yournameserver.yoursite.tld/nic/update?hostname=record.domain.tld&myip=127.0.0.1
IPv6: http://user:pass@yournameserver.yoursite.tld/nic/update?hostname=record.domain.tld&myip=::1
Multiple IPs: http://user:pass@yournameserver.yoursite.tld/nic/update?hostname=record.domain.tld&myip=127.0.0.1,127.0.0.2,::1,::2
Notes:
- user needs to be a LOCAL user, not LDAP etc
- user must have already logged-in
- user needs to be added to Domain Access Control list of domain.tld - admin status (manage all) does not suffice
- record has to exist already - unless on-demand creation is allowed
- ipv4 address in myip field will change A record
- ipv6 address in myip field will change AAAA record
- use commas to separate multiple IP addresses in the myip field, mixing v4 & v6 is allowed
DynDNS also works without the authentication header (user:pass@) when already authenticated via the session cookie from /login, even with external auth like LDAP.
However, the Domain Access Control restriction still applies.
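The update URL above can be driven from a script. A minimal sketch using `curl` (the host, credentials, and record below are hypothetical placeholders; replace them with your own):

```shell
# Hypothetical values -- replace with your own server, user, and record.
PDA_HOST="yournameserver.yoursite.tld"
DYN_USER="user"
DYN_PASS="pass"
RECORD="record.domain.tld"
MYIP="127.0.0.1,::1"   # comma-separated; v4 updates A, v6 updates AAAA

# Build the DynDNS2-style update URL.
URL="http://${PDA_HOST}/nic/update?hostname=${RECORD}&myip=${MYIP}"
echo "$URL"

# Perform the update (uncomment; requires a local PDA user with access to domain.tld):
# curl -s -u "${DYN_USER}:${DYN_PASS}" "$URL"
```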

@ -1,32 +0,0 @@
# General installation
## PowerDNS-Admin Architecture
![PowerDNS-Admin Component Layout](Architecture.png)
A PowerDNS-Admin installation includes four main components:
- PowerDNS-Admin Database
- PowerDNS-Admin Application Server
- PowerDNS-Admin Frontend Web server
- The PowerDNS server that PowerDNS-Admin will manage
All of these components can be installed on one server or, if your installation is large enough or for security reasons, split across multiple servers.
## Requirements for PowerDNS-Admin:
- A Linux-based system. Others (Arch-based, for example) may work but are currently not tested.
- Ubuntu versions tested:
- To fill in
- Red hat versions tested:
- To fill in
- Python versions tested:
- 3.6
- 3.7
- 3.8
- 3.9
- 3.10
- 3.11 - Failing due to issue with python3-saml later than 1.12.0
- A database for PowerDNS-Admin, if you are using a database for PowerDNS itself this must be separate to that database. The currently supported databases are:
- MySQL
- PostgreSQL
- SQLite
- A PowerDNS server that PowerDNS-Admin will manage.

@ -1,72 +0,0 @@
***
**WARNING**
This just uses the development server for testing purposes. For production environments you should go with a more robust solution, such as [gunicorn](web-server/Running-PowerDNS-Admin-with-Systemd,-Gunicorn--and--Nginx.md) or another WSGI server.
***
### Following example shows a systemd unit file that can run PowerDNS-Admin
You shouldn't run PowerDNS-Admin as _root_, so let's start off with creating the user/group that will later run PowerDNS-Admin:
Create a new group for PowerDNS-Admin:
> sudo groupadd powerdnsadmin
Create a user for PowerDNS-Admin:
> sudo useradd --system -g powerdnsadmin powerdnsadmin
_`--system` creates a user without login-shell and password, suitable for running system services._
Create new systemd service file:
> sudo vim /etc/systemd/system/powerdns-admin.service
General example:
```
[Unit]
Description=PowerDNS-Admin
After=network.target
[Service]
Type=simple
User=powerdnsadmin
Group=powerdnsadmin
ExecStart=/opt/web/powerdns-admin/flask/bin/python ./run.py
WorkingDirectory=/opt/web/powerdns-admin
Restart=always
[Install]
WantedBy=multi-user.target
```
Debian example:
```
[Unit]
Description=PowerDNS-Admin
After=network.target
[Service]
Type=simple
User=powerdnsadmin
Group=powerdnsadmin
Environment=PATH=/opt/web/powerdns-admin/flask/bin
ExecStart=/opt/web/powerdns-admin/flask/bin/python /opt/web/powerdns-admin/run.py
WorkingDirectory=/opt/web/powerdns-admin
Restart=always
[Install]
WantedBy=multi-user.target
```
Before starting the service, we need to make sure that the new user can work on the files in the PowerDNS-Admin folder:
> chown -R powerdnsadmin:powerdnsadmin /opt/web/powerdns-admin
After saving the file, we need to reload the systemd daemon:
> sudo systemctl daemon-reload
We can now try to start the service:
> sudo systemctl start powerdns-admin
If you would like PowerDNS-Admin to start automatically at boot, enable the service:
> systemctl enable powerdns-admin
Should the service not be up by now, consult your syslog. Generally this will be a file permission issue, or Python not finding its modules. See the Debian unit example for how to use systemd within a Python `virtualenv`.


@ -1,83 +0,0 @@
# Installing PowerDNS-Admin on CentOS 7
```
NOTE: If you are logged in as a user and not root, prefix commands with "sudo", or get root via sudo -i.
```
## Install required packages:
### Install needed repositories:
```
yum install epel-release
yum install https://repo.ius.io/ius-release-el7.rpm https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
```
### Install Python 3.6 and tools:
First remove Python 3.4 if installed:
```
yum remove python34*
yum autoremove
```
```
yum install python3 python3-devel python3-pip
pip3.6 install -U pip
pip install -U virtualenv
```
### Install required packages for building python libraries from requirements.txt file:
```
yum install gcc openldap-devel xmlsec1-devel xmlsec1-openssl libtool-ltdl-devel
```
### Install yarn to build asset files + Nodejs 14:
```
curl -sL https://rpm.nodesource.com/setup_14.x | bash -
curl -sL https://dl.yarnpkg.com/rpm/yarn.repo -o /etc/yum.repos.d/yarn.repo
yum install yarn
```
### Checkout source code and create virtualenv:
NOTE: Please adjust `/opt/web/powerdns-admin` to your local web application directory
```
git clone https://github.com/PowerDNS-Admin/PowerDNS-Admin.git /opt/web/powerdns-admin
cd /opt/web/powerdns-admin
virtualenv -p python3 flask
```
Activate your python3 environment and install libraries:
```
. ./flask/bin/activate
pip install python-dotenv
pip install -r requirements.txt
```
## Running PowerDNS-Admin:
NOTE: The default config file is located at `./powerdnsadmin/default_config.py`. If you want to load another one, please set the `FLASK_CONF` environment variable. E.g.
```bash
export FLASK_CONF=../configs/development.py
```
### Create the database schema:
```
export FLASK_APP=powerdnsadmin/__init__.py
flask db upgrade
```
**Also, we should generate asset files:**
```
yarn install --pure-lockfile
flask assets build
```
**Now you can run PowerDNS-Admin by command:**
```
./run.py
```
Open your web browser and go to `http://localhost:9191` to visit the PowerDNS-Admin web interface. Register a user. The first user will be given the Administrator role.
The first time you log into the PDA UI, you will be redirected to the settings page to configure the PDNS API information.
_**Note:**_ For production environments, we recommend running PowerDNS-Admin with gunicorn or uWSGI instead of Flask's built-in web server; take a look at the wiki pages to see how to configure them.


@ -1,14 +0,0 @@
# Installation on docker
The Docker image is `powerdnsadmin/pda-legacy`, available on [DockerHub](https://hub.docker.com/r/powerdnsadmin/pda-legacy).
The supported environment variables to configure the container are located [here](../configuration/Environment-variables.md).
You can run the container and expose the web server on port 9191 using:
```bash
docker run -d \
-e SECRET_KEY='a-very-secret-key' \
-v pda-data:/data \
-p 9191:80 \
powerdnsadmin/pda-legacy:latest
```


@ -1 +0,0 @@
Please refer to CentOS guide: [Running-PowerDNS-Admin-on-Centos-7](Running-PowerDNS-Admin-on-Centos-7.md)


@ -1,82 +0,0 @@
```
NOTE: If you are logged in as a user and not root, prefix commands with "sudo", or get root via sudo -i.
Normally under CentOS you are mostly root anyway.
```
## Install required packages
**Install Python and requirements**
```bash
dnf install python37 python3-devel python3-pip
```
**Install Backend and Environment prerequisites**
```bash
dnf install mariadb-devel mariadb-common openldap-devel xmlsec1-devel xmlsec1-openssl libtool-ltdl-devel
```
**Install Development tools**
```bash
dnf install gcc gc make
```
**Install PIP**
```bash
pip3.7 install -U pip
```
**Install Virtual Environment**
```bash
pip install -U virtualenv
```
**Install Yarn for building NodeJS asset files:**
```bash
dnf install npm
npm install yarn -g
```
## Clone the PowerDNS-Admin repository to the installation path:
```bash
cd /opt/web/
git clone https://github.com/PowerDNS-Admin/PowerDNS-Admin.git powerdns-admin
```
**Prepare the Virtual Environment:**
```bash
cd /opt/web/powerdns-admin
virtualenv -p python3 flask
```
**Activate the Python Environment and install libraries**
```bash
. ./flask/bin/activate
pip install python-dotenv
pip install -r requirements.txt
```
## Running PowerDNS-Admin
NOTE: The default config file is located at `./powerdnsadmin/default_config.py`. If you want to load another one, please set the `FLASK_CONF` environment variable. E.g.
```bash
export FLASK_CONF=../configs/development.py
```
**Then create the database schema by running:**
```
(flask) [khanh@localhost powerdns-admin] export FLASK_APP=powerdnsadmin/__init__.py
(flask) [khanh@localhost powerdns-admin] flask db upgrade
```
**Also, we should generate asset files:**
```
(flask) [khanh@localhost powerdns-admin] yarn install --pure-lockfile
(flask) [khanh@localhost powerdns-admin] flask assets build
```
**Now you can run PowerDNS-Admin by command:**
```
(flask) [khanh@localhost powerdns-admin] ./run.py
```
Open your web browser and go to `http://localhost:9191` to visit the PowerDNS-Admin web interface. Register a user. The first user will be given the Administrator role.
The first time you log into the PDA UI, you will be redirected to the settings page to configure the PDNS API information.
_**Note:**_ For production environments, we recommend running PowerDNS-Admin with WSGI under Apache instead of Flask's built-in web server.
Take a look at the [WSGI Apache Example](web-server/WSGI-Apache-example#fedora) WIKI page to see how to configure it.


@ -1,90 +0,0 @@
# Installing PowerDNS-Admin on Ubuntu or Debian based systems
First setup your database accordingly:
[Database Setup](../database-setup/README.md)
## Install required packages:
### Install required packages for building python libraries from requirements.txt file
For Debian 11 (bullseye) and above:
```bash
sudo apt install -y python3-dev git libsasl2-dev libldap2-dev python3-venv libmariadb-dev pkg-config build-essential curl libpq-dev
```
Older systems might also need the following:
```bash
sudo apt install -y libssl-dev libxml2-dev libxslt1-dev libxmlsec1-dev libffi-dev apt-transport-https virtualenv
```
### Install NodeJs
```bash
curl -sL https://deb.nodesource.com/setup_14.x | sudo bash -
sudo apt install -y nodejs
```
### Install yarn to build asset files
For Debian 11 (bullseye) and above:
```bash
curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | sudo tee /usr/share/keyrings/yarnkey.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/yarnkey.gpg] https://dl.yarnpkg.com/debian stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt update && sudo apt install -y yarn
```
For older Debian systems:
```bash
sudo curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt update -y
sudo apt install -y yarn
```
### Checkout source code and create virtualenv
_**Note:**_ Please adjust `/opt/web/powerdns-admin` to your local web application directory
```bash
git clone https://github.com/PowerDNS-Admin/PowerDNS-Admin.git /opt/web/powerdns-admin
cd /opt/web/powerdns-admin
python3 -mvenv ./venv
```
Activate your python3 environment and install libraries:
```bash
source ./venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```
## Running PowerDNS-Admin
Create PowerDNS-Admin config file and make the changes necessary for your use case. Make sure to change `SECRET_KEY` to a long random string that you generated yourself ([see Flask docs](https://flask.palletsprojects.com/en/1.1.x/config/#SECRET_KEY)), do not use the pre-defined one. E.g.:
```bash
cp /opt/web/powerdns-admin/configs/development.py /opt/web/powerdns-admin/configs/production.py
vim /opt/web/powerdns-admin/configs/production.py
export FLASK_CONF=../configs/production.py
```
Do the DB migration
```bash
export FLASK_APP=powerdnsadmin/__init__.py
flask db upgrade
```
Then generate asset files
```bash
yarn install --pure-lockfile
flask assets build
```
Now you can run PowerDNS-Admin by command
```bash
./run.py
```
This is good for testing, but for production usage, you should use gunicorn or uwsgi. See [Running PowerDNS Admin with Systemd, Gunicorn and Nginx](../web-server/Running-PowerDNS-Admin-with-Systemd-Gunicorn-and-Nginx.md) for instructions.
From here you can now follow the [Getting started guide](../configuration/Getting-started.md).


@ -1,102 +0,0 @@
On [FreeBSD](https://www.freebsd.org/), most software is installed using `pkg`. You can always build from source with the Ports system. This method uses as many binary ports as possible, and builds some python packages from source. It installs all the required runtimes in the global system (e.g., python, node, yarn) and then builds a virtual python environment in `/opt/python`. Likewise, it installs powerdns-admin in `/opt/powerdns-admin`.
### Build an area to host files
```bash
mkdir -p /opt/python
```
### Install prerequisite runtimes: python, node, yarn
```bash
sudo pkg install git python3 curl node12 yarn-node12
sudo pkg install libxml2 libxslt pkgconf py37-xmlsec py37-cffi py37-ldap
```
## Check Out Source Code
_**Note:**_ Please adjust `/opt/powerdns-admin` to your local web application directory
```bash
git clone https://github.com/PowerDNS-Admin/PowerDNS-Admin.git /opt/powerdns-admin
cd /opt/powerdns-admin
```
## Make Virtual Python Environment
Make a virtual environment for python. Activate your python3 environment and install libraries. It's easier to install some python libraries as system packages, so we add the `--system-site-packages` option to pull those in.
> Note: I couldn't get `python-ldap` to install correctly, and I don't need it. I commented out the `python-ldap` line in `requirements.txt` and it all built and installed correctly. If you don't intend to use LDAP authentication, you'll be fine. If you need LDAP authentication, it probably won't work.
```bash
python3 -m venv /opt/python --system-site-packages
source /opt/python/bin/activate
/opt/python/bin/python3 -m pip install --upgrade pip wheel
# this command comments out python-ldap
perl -pi -e 's,^python-ldap,\# python-ldap,' requirements.txt
pip3 install -r requirements.txt
```
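If you are unsure what the `perl` one-liner does, here it is applied to a throwaway file (the path and contents are hypothetical, for illustration only):

```shell
# Create a two-line sample requirements file, then comment out python-ldap
printf 'Flask\npython-ldap\n' > /tmp/requirements-demo.txt
perl -pi -e 's,^python-ldap,\# python-ldap,' /tmp/requirements-demo.txt
cat /tmp/requirements-demo.txt
```

The `python-ldap` line comes back commented out; all other lines are untouched.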
## Configuring PowerDNS-Admin
NOTE: The default config file is located at `./powerdnsadmin/default_config.py`. If you want to load another one, please set the `FLASK_CONF` environment variable. E.g.
```bash
cp configs/development.py /opt/powerdns-admin/production.py
export FLASK_CONF=/opt/powerdns-admin/production.py
```
### Update the Flask config
Edit your flask python configuration. Insert values for the database server, user name, password, etc.
```bash
vim $FLASK_CONF
```
Edit the values below to something sensible
```python
### BASIC APP CONFIG
SALT = '[something]'
SECRET_KEY = '[something]'
BIND_ADDRESS = '0.0.0.0'
PORT = 9191
OFFLINE_MODE = False
### DATABASE CONFIG
SQLA_DB_USER = 'pda'
SQLA_DB_PASSWORD = 'changeme'
SQLA_DB_HOST = '127.0.0.1'
SQLA_DB_NAME = 'pda'
SQLALCHEMY_TRACK_MODIFICATIONS = True
```
Be sure to uncomment one of the lines like `SQLALCHEMY_DATABASE_URI`.
### Initialise the database
```bash
export FLASK_APP=powerdnsadmin/__init__.py
flask db upgrade
```
### Build web assets
```bash
yarn install --pure-lockfile
flask assets build
```
## Running PowerDNS-Admin
Now you can run PowerDNS-Admin by command
```bash
./run.py
```
Open your web browser and go to `http://localhost:9191` to visit PowerDNS-Admin web interface. Register a user. The first user will be in the Administrator role.
### Running at startup
This is good for testing, but for production usage, you should use gunicorn or uwsgi. See [Running PowerDNS Admin with Systemd, Gunicorn and Nginx](../web-server/Running-PowerDNS-Admin-with-Systemd,-Gunicorn--and--Nginx.md) for instructions.
The right approach long-term is to create a startup script in `/usr/local/etc/rc.d` and enable it through `/etc/rc.conf`.


@ -1,73 +0,0 @@
This describes how to run Apache 2 on the host system with a reverse proxy directing traffic to the Docker container.
This is usually done to add SSL certificates and prepend a subdirectory.
The `network_mode: host` setting is not necessary, but is used here for LDAP availability.
docker-compose.yml
```
version: "3"
services:
app:
image: powerdnsadmin/pda-legacy:latest
container_name: powerdns
restart: always
network_mode: "host"
logging:
driver: json-file
options:
max-size: 50m
environment:
- BIND_ADDRESS=127.0.0.1:8082
- SECRET_KEY='NotVerySecret'
- SQLALCHEMY_DATABASE_URI=mysql://pdnsadminuser:password@127.0.0.1/powerdnsadmin
- GUNICORN_TIMEOUT=60
- GUNICORN_WORKERS=2
- GUNICORN_LOGLEVEL=DEBUG
- OFFLINE_MODE=False
- CSRF_COOKIE_SECURE=False
- SCRIPT_NAME=/powerdns
```
After starting the container, create the static directory and populate it:
```
mkdir -p /var/www/powerdns
docker cp powerdns:/app/powerdnsadmin/static /var/www/powerdns/
chown -R root:www-data /var/www/powerdns
```
Adjust the static references; the files under `static/assets/css` contain hardcoded `/static` paths:
```
sed -i 's/\/static/\/powerdns\/static/' /var/www/powerdns/static/assets/css/*
```
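To see what the substitution does, here it is applied to a single sample CSS rule (file path and contents are hypothetical):

```shell
# Rewrite a hardcoded /static reference to /powerdns/static
echo 'background: url("/static/assets/img/logo.png");' > /tmp/demo.css
sed -i 's/\/static/\/powerdns\/static/' /tmp/demo.css
cat /tmp/demo.css
```

Note that without the `g` flag only the first `/static` occurrence on each line is rewritten, matching the command above.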
Apache config:
You can also set the `SCRIPT_NAME` environment variable via Apache; setting it in one place is sufficient.
```
<Location /powerdns>
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Forwarded-Port "443"
RequestHeader set SCRIPT_NAME "/powerdns"
ProxyPreserveHost On
</Location>
ProxyPass /powerdns/static !
ProxyPass /powerdns http://127.0.0.1:8082/powerdns
ProxyPassReverse /powerdns http://127.0.0.1:8082/powerdns
Alias /powerdns/static "/var/www/powerdns/static"
<Directory "/var/www/powerdns/static">
Options None
#Options +Indexes
AllowOverride None
Order allow,deny
Allow from all
</Directory>
```


@ -1,97 +0,0 @@
Following is an example showing how to run PowerDNS-Admin with systemd, gunicorn and Apache:
The systemd and gunicorn setup is the same as with nginx. This set of configurations assumes you have installed PowerDNS-Admin under /opt/powerdns-admin and are running with a package-installed gunicorn.
## Configure systemd service
`$ sudo vim /etc/systemd/system/powerdns-admin.service`
```
[Unit]
Description=PowerDNS web administration service
Requires=powerdns-admin.socket
Wants=network.target
After=network.target mysqld.service postgresql.service slapd.service mariadb.service
[Service]
PIDFile=/run/powerdns-admin/pid
User=pdnsa
Group=pdnsa
WorkingDirectory=/opt/powerdns-admin
ExecStart=/usr/bin/gunicorn-3.6 --workers 4 --log-level info --pid /run/powerdns-admin/pid --bind unix:/run/powerdns-admin/socket "powerdnsadmin:create_app(config='config.py')"
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true
Restart=on-failure
RestartSec=10
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
```
`$ sudo vim /etc/systemd/system/powerdns-admin.socket`
```
[Unit]
Description=PowerDNS-Admin socket
[Socket]
ListenStream=/run/powerdns-admin/socket
[Install]
WantedBy=sockets.target
```
`$ sudo vim /etc/tmpfiles.d/powerdns-admin.conf`
```
d /run/powerdns-admin 0755 pdnsa pdnsa -
```
Then `sudo systemctl daemon-reload; sudo systemctl start powerdns-admin.socket; sudo systemctl enable powerdns-admin.socket` to start the Powerdns-Admin service and make it run on boot.
## Sample Apache configuration
This includes SSL redirect.
```
<VirtualHost *:80>
ServerName dnsadmin.company.com
DocumentRoot "/opt/powerdns-admin"
<Directory "/opt/powerdns-admin">
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Require all granted
</Directory>
Redirect permanent / https://dnsadmin.company.com/
</VirtualHost>
<VirtualHost *:443>
ServerName dnsadmin.company.com
DocumentRoot "/opt/powerdns-admin/powerdnsadmin"
## Alias declarations for resources outside the DocumentRoot
Alias /static/ "/opt/powerdns-admin/powerdnsadmin/static/"
Alias /favicon.ico "/opt/powerdns-admin/powerdnsadmin/static/favicon.ico"
<Directory "/opt/powerdns-admin">
AllowOverride None
Require all granted
</Directory>
## Proxy rules
ProxyRequests Off
ProxyPreserveHost On
ProxyPass /static/ !
ProxyPass /favicon.ico !
ProxyPass / unix:/var/run/powerdns-admin/socket|http://%{HTTP_HOST}/
ProxyPassReverse / unix:/var/run/powerdns-admin/socket|http://%{HTTP_HOST}/
## SSL directives
SSLEngine on
SSLCertificateFile "/etc/pki/tls/certs/dnsadmin.company.com.crt"
SSLCertificateKeyFile "/etc/pki/tls/private/dnsadmin.company.com.key"
</VirtualHost>
```
## Notes
* The above assumes your installation is under /opt/powerdns-admin
* The hostname is assumed as dnsadmin.company.com
* gunicorn is installed in /usr/bin via a package (as is the case with CentOS/Red Hat 7) and you have Python 3.6 installed. If you prefer to use flask, then see the systemd configuration for nginx.
* On Ubuntu / Debian systems, you may need to enable the "proxy_http" module with `a2enmod proxy_http`


@ -1,181 +0,0 @@
Following is an example showing how to run PowerDNS-Admin with systemd, gunicorn and nginx:
## Configure PowerDNS-Admin
Create PowerDNS-Admin config file and make the changes necessary for your use case. Make sure to change `SECRET_KEY` to a long random string that you generated yourself ([see Flask docs](https://flask.palletsprojects.com/en/1.1.x/config/#SECRET_KEY)), do not use the pre-defined one.
```
$ cp /opt/web/powerdns-admin/configs/development.py /opt/web/powerdns-admin/configs/production.py
$ vim /opt/web/powerdns-admin/configs/production.py
```
## Configure systemd service
`$ sudo vim /etc/systemd/system/powerdns-admin.service`
```
[Unit]
Description=PowerDNS-Admin
Requires=powerdns-admin.socket
After=network.target
[Service]
PIDFile=/run/powerdns-admin/pid
User=pdns
Group=pdns
WorkingDirectory=/opt/web/powerdns-admin
ExecStartPre=+mkdir -p /run/powerdns-admin/
ExecStartPre=+chown pdns:pdns -R /run/powerdns-admin/
ExecStart=/usr/local/bin/gunicorn --pid /run/powerdns-admin/pid --bind unix:/run/powerdns-admin/socket 'powerdnsadmin:create_app()'
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
```
`$ sudo systemctl edit powerdns-admin.service`
```
[Service]
Environment="FLASK_CONF=../configs/production.py"
```
`$ sudo vim /etc/systemd/system/powerdns-admin.socket`
```
[Unit]
Description=PowerDNS-Admin socket
[Socket]
ListenStream=/run/powerdns-admin/socket
[Install]
WantedBy=sockets.target
```
`$ sudo vim /etc/tmpfiles.d/powerdns-admin.conf`
```
d /run/powerdns-admin 0755 pdns pdns -
```
Then `sudo systemctl daemon-reload; sudo systemctl start powerdns-admin.socket; sudo systemctl enable powerdns-admin.socket` to start the Powerdns-Admin service and make it run on boot.
## Sample nginx configuration
```
server {
listen *:80;
server_name powerdns-admin.local www.powerdns-admin.local;
index index.html index.htm index.php;
root /opt/web/powerdns-admin;
access_log /var/log/nginx/powerdns-admin.local.access.log combined;
error_log /var/log/nginx/powerdns-admin.local.error.log;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_redirect off;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
proxy_buffer_size 8k;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_headers_hash_bucket_size 64;
location ~ ^/static/ {
include /etc/nginx/mime.types;
root /opt/web/powerdns-admin/powerdnsadmin;
location ~* \.(jpg|jpeg|png|gif)$ {
expires 365d;
}
location ~* ^.+.(css|js)$ {
expires 7d;
}
}
location / {
proxy_pass http://unix:/run/powerdns-admin/socket;
proxy_read_timeout 120;
proxy_connect_timeout 120;
proxy_redirect off;
}
}
```
<details>
<summary>Sample Nginx-Configuration for SSL</summary>
* I'm binding this config to every DNS name with `default_server`, but you can remove that and set your `server_name` instead.
```
server {
listen 80 default_server;
server_name "";
return 301 https://$http_host$request_uri;
}
server {
listen 443 ssl http2 default_server;
server_name _;
index index.html index.htm;
error_log /var/log/nginx/error_powerdnsadmin.log error;
access_log off;
ssl_certificate path_to_your_fullchain_or_cert;
ssl_certificate_key path_to_your_key;
ssl_dhparam path_to_your_dhparam.pem;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_session_cache shared:SSL:10m;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_redirect off;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
proxy_buffer_size 8k;
proxy_set_header Host $http_host;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_headers_hash_bucket_size 64;
location ~ ^/static/ {
include mime.types;
root /opt/web/powerdns-admin/powerdnsadmin;
location ~* \.(jpg|jpeg|png|gif)$ { expires 365d; }
location ~* ^.+.(css|js)$ { expires 7d; }
}
location ~ ^/upload/ {
include mime.types;
root /opt/web/powerdns-admin;
location ~* \.(jpg|jpeg|png|gif)$ { expires 365d; }
location ~* ^.+.(css|js)$ { expires 7d; }
}
location / {
proxy_pass http://unix:/run/powerdns-admin/socket;
proxy_read_timeout 120;
proxy_connect_timeout 120;
proxy_redirect http:// $scheme://;
}
}
```
</details>
## Note
* `/opt/web/powerdns-admin` is the path to your powerdns-admin web directory
* Make sure you have already installed gunicorn in the Flask virtualenv.
* `powerdns-admin.local` is just an example of your web domain name.


@ -1,18 +0,0 @@
Following is an example showing how to run PowerDNS-Admin with supervisord.
Create the supervisord program config file:
```
$ sudo vim /etc/supervisor.d/powerdnsadmin.conf
```
```
[program:powerdnsadmin]
command=/opt/web/powerdns-admin/flask/bin/python ./run.py
stdout_logfile=/var/log/supervisor/program_powerdnsadmin.log
stderr_logfile=/var/log/supervisor/program_powerdnsadmin.error
autostart=true
autorestart=true
directory=/opt/web/powerdns-admin
```
Then `sudo supervisorctl start powerdnsadmin` to start the Powerdns-Admin service.


@ -1,50 +0,0 @@
## Configure systemd service
This example uses a package-installed gunicorn (instead of flask-installed) and PowerDNS-Admin installed under /opt/powerdns-admin.
`$ sudo vim /etc/systemd/system/powerdns-admin.service`
```
[Unit]
Description=PowerDNS web administration service
Requires=powerdns-admin.socket
Wants=network.target
After=network.target mysqld.service postgresql.service slapd.service mariadb.service
[Service]
PIDFile=/run/powerdns-admin/pid
User=pdnsa
Group=pdnsa
WorkingDirectory=/opt/powerdns-admin
ExecStart=/usr/bin/gunicorn-3.6 --workers 4 --log-level info --pid /run/powerdns-admin/pid --bind unix:/run/powerdns-admin/socket "powerdnsadmin:create_app(config='config.py')"
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true
Restart=on-failure
RestartSec=10
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
```
`$ sudo vim /etc/systemd/system/powerdns-admin.socket`
```
[Unit]
Description=PowerDNS-Admin socket
[Socket]
ListenStream=/run/powerdns-admin/socket
[Install]
WantedBy=sockets.target
```
`$ sudo vim /etc/tmpfiles.d/powerdns-admin.conf`
```
d /run/powerdns-admin 0755 pdns pdns -
```
Then `sudo systemctl daemon-reload; sudo systemctl start powerdns-admin.socket; sudo systemctl enable powerdns-admin.socket` to start the Powerdns-Admin service and make it run on boot.


@ -1,100 +0,0 @@
How to run PowerDNS-Admin via WSGI and Apache2.4 using mod_wsgi.
**Note**: You must install mod_wsgi using pip3 instead of the system default mod_wsgi!
### Ubuntu/Debian
```shell
# apt install apache2-dev
# virtualenv -p python3 flask
# source ./flask/bin/activate
(flask) # pip3 install mod-wsgi
(flask) # mod_wsgi-express install-module > /etc/apache2/mods-available/wsgi.load
(flask) # a2enmod wsgi
(flask) # systemctl restart apache2
```
### CentOS
```shell
# yum install httpd-devel
# virtualenv -p python3 flask
# source ./flask/bin/activate
(flask) # pip3 install mod-wsgi
(flask) # mod_wsgi-express install-module > /etc/httpd/conf.modules.d/02-wsgi.conf
(flask) # systemctl restart httpd
```
### Fedora
```bash
# Install Apache's Development interfaces and package requirements
dnf install httpd-devel gcc gc make
virtualenv -p python3 flask
source ./flask/bin/activate
# Install WSGI for HTTPD
pip install mod_wsgi-httpd
# Install WSGI
pip install mod-wsgi
# Enable the module in Apache:
mod_wsgi-express install-module > /etc/httpd/conf.modules.d/02-wsgi.conf
systemctl restart httpd
```
Apache vhost configuration;
```apache
<VirtualHost *:443>
ServerName superawesomedns.foo.bar
ServerAlias [fe80::1]
ServerAdmin webmaster@foo.bar
SSLEngine On
SSLCertificateFile /some/path/ssl/certs/cert.pem
SSLCertificateKeyFile /some/path/ssl/private/cert.key
ErrorLog /var/log/apache2/error-superawesomedns.foo.bar.log
CustomLog /var/log/apache2/access-superawesomedns.foo.bar.log combined
DocumentRoot /srv/vhosts/superawesomedns.foo.bar/
WSGIDaemonProcess pdnsadmin user=pdnsadmin group=pdnsadmin threads=5
WSGIScriptAlias / /srv/vhosts/superawesomedns.foo.bar/powerdnsadmin.wsgi
# pass BasicAuth on to the WSGI process
WSGIPassAuthorization On
<Directory "/srv/vhosts/superawesomedns.foo.bar/">
WSGIProcessGroup pdnsadmin
WSGIApplicationGroup %{GLOBAL}
AllowOverride None
Options +ExecCGI +FollowSymLinks
SSLRequireSSL
AllowOverride None
Require all granted
</Directory>
</VirtualHost>
```
**In Fedora, you might want to change the following line:**
```apache
WSGIDaemonProcess pdnsadmin socket-user=apache user=pdnsadmin group=pdnsadmin threads=5
```
**And you should add the following line to `/etc/httpd/conf/httpd.conf`:**
```apache
WSGISocketPrefix /var/run/wsgi
```
Content of `/srv/vhosts/superawesomedns.foo.bar/powerdnsadmin.wsgi`;
```python
#!/usr/bin/env python3
import sys
sys.path.insert(0, '/srv/vhosts/superawesomedns.foo.bar')
from app import app as application
```
Starting from version 0.2, the `powerdnsadmin.wsgi` file is slightly different:
```python
#!/usr/bin/env python3
import sys
sys.path.insert(0, '/srv/vhosts/superawesomedns.foo.bar')
from powerdnsadmin import create_app
application = create_app()
```
(this implies that the pdnsadmin user/group exists, and that you have mod_wsgi loaded)


@ -1,56 +0,0 @@
# uWSGI Example
This guide will show you how to run PowerDNS-Admin via uWSGI and nginx. It was written using Debian 8 with the following software versions:
- nginx 1.6.2
- uwsgi 2.0.7-debian
- python 2.7.9
## Software installation:
1. apt install the following packages:
- `uwsgi`
- `uwsgi-plugin-python`
- `nginx`
## Step-by-step instructions
1. Create a uWSGI .ini in `/etc/uwsgi/apps-enabled` with the following contents, making sure to replace the chdir, pythonpath and virtualenv directories with where you've installed PowerDNS-Admin:
```ini
[uwsgi]
plugins = python27
uid=www-data
gid=www-data
chdir = /opt/pdns-admin/PowerDNS-Admin/
pythonpath = /opt/pdns-admin/PowerDNS-Admin/
virtualenv = /opt/pdns-admin/PowerDNS-Admin/flask
mount = /pdns=powerdnsadmin:create_app()
manage-script-name = true
vacuum = true
harakiri = 20
buffer-size = 32768
post-buffering = 8192
socket = /run/uwsgi/app/%n/%n.socket
chown-socket = www-data
pidfile = /run/uwsgi/app/%n/%n.pid
daemonize = /var/log/uwsgi/app/%n.log
enable-threads
```
2. Add the following configuration to your nginx config:
```nginx
location / { try_files $uri @pdns_admin; }
location @pdns_admin {
include uwsgi_params;
uwsgi_pass unix:/run/uwsgi/app/pdns-admin/pdns-admin.socket;
}
location /pdns/static/ {
alias /opt/pdns-admin/PowerDNS-Admin/app/static/;
}
```
3. Restart nginx and uwsgi.
4. You're done and PowerDNS-Admin will now be available via nginx.


@ -19,7 +19,7 @@ logger = logging.getLogger('alembic.env')
# target_metadata = mymodel.Base.metadata
from flask import current_app
config.set_main_option('sqlalchemy.url',
current_app.config.get('SQLALCHEMY_DATABASE_URI').replace("%","%%"))
current_app.config.get('SQLALCHEMY_DATABASE_URI'))
target_metadata = current_app.extensions['migrate'].db.metadata
# other values from the config, defined by the needs of env.py,


@ -1,41 +0,0 @@
"""add apikey account mapping table
Revision ID: 0967658d9c0d
Revises: 0d3d93f1c2e0
Create Date: 2021-11-13 22:28:46.133474
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '0967658d9c0d'
down_revision = '0d3d93f1c2e0'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('apikey_account',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('apikey_id', sa.Integer(), nullable=False),
sa.Column('account_id', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['account_id'], ['account.id'], ),
sa.ForeignKeyConstraint(['apikey_id'], ['apikey.id'], ),
sa.PrimaryKeyConstraint('id')
)
with op.batch_alter_table('history', schema=None) as batch_op:
batch_op.create_index(batch_op.f('ix_history_created_on'), ['created_on'], unique=False)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table('history', schema=None) as batch_op:
batch_op.drop_index(batch_op.f('ix_history_created_on'))
op.drop_table('apikey_account')
# ### end Alembic commands ###


@ -1,34 +0,0 @@
"""Add domain_id to history table
Revision ID: 0d3d93f1c2e0
Revises: 3f76448bb6de
Create Date: 2021-02-15 17:23:05.688241
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '0d3d93f1c2e0'
down_revision = '3f76448bb6de'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table('history', schema=None) as batch_op:
batch_op.add_column(sa.Column('domain_id', sa.Integer(), nullable=True))
batch_op.create_foreign_key('fk_domain_id', 'domain', ['domain_id'], ['id'])
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table('history', schema=None) as batch_op:
batch_op.drop_constraint('fk_domain_id', type_='foreignkey')
batch_op.drop_column('domain_id')
# ### end Alembic commands ###

View File

@@ -18,12 +18,8 @@ depends_on = None
def upgrade():
with op.batch_alter_table('user') as batch_op:
batch_op.add_column(
sa.Column('confirmed', sa.Boolean(), nullable=True,
sa.Column('confirmed', sa.Boolean(), nullable=False,
default=False))
with op.batch_alter_table('user') as batch_op:
user = sa.sql.table('user', sa.sql.column('confirmed'))
batch_op.execute(user.update().values(confirmed=False))
batch_op.alter_column('confirmed', nullable=False, existing_type=sa.Boolean(), existing_nullable=True, existing_server_default=False)
def downgrade():

View File

@@ -1,46 +0,0 @@
"""Fix typo in history detail
Revision ID: 6ea7dc05f496
Revises: fbc7cf864b24
Create Date: 2022-05-10 10:16:58.784497
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '6ea7dc05f496'
down_revision = 'fbc7cf864b24'
branch_labels = None
depends_on = None
history_table = sa.sql.table('history',
sa.Column('detail', sa.Text),
)
def upgrade():
op.execute(
history_table.update()
.where(history_table.c.detail.like('%"add_rrests":%'))
.values({
'detail': sa.func.replace(
sa.func.replace(history_table.c.detail, '"add_rrests":', '"add_rrsets":'),
'"del_rrests":', '"del_rrsets":'
)
})
)
def downgrade():
op.execute(
history_table.update()
.where(history_table.c.detail.like('%"add_rrsets":%'))
.values({
'detail': sa.func.replace(
sa.func.replace(history_table.c.detail, '"add_rrsets":', '"add_rrests":'),
'"del_rrsets":', '"del_rrests":'
)
})
)
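The nested `sa.func.replace` calls above rewrite serialized history JSON in place. As a standalone illustration of the same replacement pair (the sample `detail` payload is invented):

```python
# A made-up history detail string containing the misspelled "rrests" keys
# that this migration corrects.
detail = '{"add_rrests": [{"name": "www"}], "del_rrests": []}'

# The migration's two replacements, applied in the same order.
fixed = detail.replace('"add_rrests":', '"add_rrsets":') \
              .replace('"del_rrests":', '"del_rrsets":')

print(fixed)
```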

View File

@@ -56,9 +56,9 @@ def seed_data():
op.bulk_insert(template_table,
[
{'id': 1, 'name': 'basic_template_1', 'description': 'Basic Template #1'},
{'id': 2, 'name': 'basic_template_2', 'description': 'Basic Template #2'},
{'id': 3, 'name': 'basic_template_3', 'description': 'Basic Template #3'}
{id: 1, 'name': 'basic_template_1', 'description': 'Basic Template #1'},
{id: 2, 'name': 'basic_template_2', 'description': 'Basic Template #2'},
{id: 3, 'name': 'basic_template_3', 'description': 'Basic Template #3'}
]
)

View File

@@ -1,24 +0,0 @@
"""Add unique index to settings table keys
Revision ID: b24bf17725d2
Revises: f41520e41cee
Create Date: 2023-02-18 00:00:00.000000
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'b24bf17725d2'
down_revision = 'f41520e41cee'
branch_labels = None
depends_on = None
def upgrade():
op.create_index(op.f('ix_setting_name'), 'setting', ['name'], unique=True)
def downgrade():
op.drop_index(op.f('ix_setting_name'), table_name='setting')

View File

@@ -1,31 +0,0 @@
"""update domain type length
Revision ID: f41520e41cee
Revises: 6ea7dc05f496
Create Date: 2023-01-10 11:56:28.538485
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'f41520e41cee'
down_revision = '6ea7dc05f496'
branch_labels = None
depends_on = None
def upgrade():
with op.batch_alter_table('domain') as batch_op:
batch_op.alter_column('type',
existing_type=sa.String(length=6),
type_=sa.String(length=8))
def downgrade():
with op.batch_alter_table('domain') as batch_op:
batch_op.alter_column('type',
existing_type=sa.String(length=8),
type_=sa.String(length=6))

View File

@@ -1,47 +0,0 @@
"""update history detail quotes
Revision ID: fbc7cf864b24
Revises: 0967658d9c0d
Create Date: 2022-05-04 19:49:54.054285
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'fbc7cf864b24'
down_revision = '0967658d9c0d'
branch_labels = None
depends_on = None
def upgrade():
history_table = sa.sql.table(
'history',
sa.Column('id', sa.Integer),
sa.Column('msg', sa.String),
sa.Column('detail', sa.Text),
sa.Column('created_by', sa.String),
sa.Column('created_on', sa.DateTime),
sa.Column('domain_id', sa.Integer)
)
op.execute(
history_table.update().where(
sa.and_(
history_table.c.detail.like("%'%"),
history_table.c.detail.notlike("%rrests%"),
history_table.c.detail.notlike("%rrsets%")
)
).values({
'detail': sa.func.replace(
history_table.c.detail,
"'",
'"'
)
})
)
def downgrade():
pass

View File

@@ -1,22 +1,14 @@
{
"dependencies": {
"@fortawesome/fontawesome-free": "6.3.0",
"admin-lte": "3.2.0",
"bootstrap": "4.6.2",
"bootstrap-datepicker": "^1.9.0",
"admin-lte": "2.4.9",
"bootstrap": "^3.4.1",
"bootstrap-validator": "^0.11.9",
"datatables.net-plugins": "^1.13.1",
"datatables.net-plugins": "^1.10.19",
"icheck": "^1.0.2",
"jquery-slimscroll": "^1.3.8",
"jquery-sparkline": "^2.4.0",
"jquery-ui-dist": "^1.13.2",
"jquery-ui-dist": "^1.12.1",
"jquery.quicksearch": "^2.4.0",
"jquery-validation": "^1.19.5",
"jtimeout": "^3.2.0",
"knockout": "^3.5.1",
"jtimeout": "^3.1.0",
"multiselect": "^0.9.12"
},
"resolutions": {
"admin-lte/@fortawesome/fontawesome-free": "6.3.0"
}
}

View File

@@ -1,14 +1,14 @@
import os
import logging
from flask import Flask
from flask_seasurf import SeaSurf
from flask_mail import Mail
from werkzeug.middleware.proxy_fix import ProxyFix
from flask_session import Session
from .lib import utils
def create_app(config=None):
from powerdnsadmin.lib.settings import AppSettings
from . import models, routes, services
from .assets import assets
app = Flask(__name__)
@@ -32,6 +32,29 @@ def create_app(config=None):
# Proxy
app.wsgi_app = ProxyFix(app.wsgi_app)
# CSRF protection
csrf = SeaSurf(app)
csrf.exempt(routes.index.dyndns_checkip)
csrf.exempt(routes.index.dyndns_update)
csrf.exempt(routes.index.saml_authorized)
csrf.exempt(routes.api.api_login_create_zone)
csrf.exempt(routes.api.api_login_delete_zone)
csrf.exempt(routes.api.api_generate_apikey)
csrf.exempt(routes.api.api_delete_apikey)
csrf.exempt(routes.api.api_update_apikey)
csrf.exempt(routes.api.api_zone_subpath_forward)
csrf.exempt(routes.api.api_zone_forward)
csrf.exempt(routes.api.api_create_zone)
csrf.exempt(routes.api.api_create_account)
csrf.exempt(routes.api.api_delete_account)
csrf.exempt(routes.api.api_update_account)
csrf.exempt(routes.api.api_create_user)
csrf.exempt(routes.api.api_delete_user)
csrf.exempt(routes.api.api_update_user)
csrf.exempt(routes.api.api_list_account_users)
csrf.exempt(routes.api.api_add_account_user)
csrf.exempt(routes.api.api_remove_account_user)
# Load config from env variables if using docker
if os.path.exists(os.path.join(app.root_path, 'docker_config.py')):
app.config.from_object('powerdnsadmin.docker_config')
@@ -43,32 +66,18 @@ def create_app(config=None):
if 'FLASK_CONF' in os.environ:
app.config.from_envvar('FLASK_CONF')
# Load app specified configuration
# Load app sepecified configuration
if config is not None:
if isinstance(config, dict):
app.config.update(config)
elif config.endswith('.py'):
app.config.from_pyfile(config)
# Load any settings defined with environment variables
AppSettings.load_environment(app)
# HSTS
if app.config.get('HSTS_ENABLED'):
from flask_sslify import SSLify
_sslify = SSLify(app) # lgtm [py/unused-local-variable]
# Load Flask-Session
app.config['SESSION_TYPE'] = app.config.get('SESSION_TYPE')
if 'SESSION_TYPE' in os.environ:
app.config['SESSION_TYPE'] = os.environ.get('SESSION_TYPE')
sess = Session(app)
# create sessions table if using sqlalchemy backend
if os.environ.get('SESSION_TYPE') == 'sqlalchemy':
sess.app.session_interface.db.create_all()
# SMTP
app.mail = Mail(app)
@@ -82,12 +91,12 @@ def create_app(config=None):
app.jinja_env.filters['display_record_name'] = utils.display_record_name
app.jinja_env.filters['display_master_name'] = utils.display_master_name
app.jinja_env.filters['display_second_to_time'] = utils.display_time
app.jinja_env.filters['display_setting_state'] = utils.display_setting_state
app.jinja_env.filters['pretty_domain_name'] = utils.pretty_domain_name
app.jinja_env.filters['format_datetime_local'] = utils.format_datetime
app.jinja_env.filters['format_zone_type'] = utils.format_zone_type
app.jinja_env.filters[
'email_to_gravatar_url'] = utils.email_to_gravatar_url
app.jinja_env.filters[
'display_setting_state'] = utils.display_setting_state
# Register context processors
# Register context proccessors
from .models.setting import Setting
@app.context_processor
@@ -100,4 +109,9 @@ def create_app(config=None):
setting = Setting()
return dict(SETTING=setting)
@app.context_processor
def inject_mode():
setting = app.config.get('OFFLINE_MODE', False)
return dict(OFFLINE_MODE=setting)
return app
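`create_app` layers configuration from several sources, with later sources overriding earlier ones: the bundled config module, an optional `FLASK_CONF` file, the explicit `config` argument, and finally environment variables. A minimal precedence sketch with plain dicts (all keys and values here are invented examples, not real PDA settings):

```python
# Precedence sketch mirroring the load order in create_app: each later
# layer wins over the earlier ones. Keys are illustrative only.
defaults     = {"PORT": 9191, "HSTS_ENABLED": False}   # e.g. default_config.py
env_file     = {"HSTS_ENABLED": True}                  # e.g. FLASK_CONF file
explicit     = {"PORT": 9000}                          # config= argument

settings = {}
for layer in (defaults, env_file, explicit):
    settings.update(layer)  # later layers override earlier ones

print(settings)
```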

View File

@@ -4,65 +4,62 @@ from flask_assets import Bundle, Environment, Filter
class ConcatFilter(Filter):
"""
Filter that merges files, placing a semicolon between them.
Fixes issues caused by missing semicolons at end of JS assets, for example
with last statement of jquery.pjax.js.
"""
def concat(self, out, hunks, **kw):
out.write(';'.join([h.data() for h, info in hunks]))
css_login = Bundle(
'node_modules/@fortawesome/fontawesome-free/css/all.css',
'node_modules/icheck/skins/square/blue.css',
'node_modules/admin-lte/dist/css/adminlte.css',
filters=('rcssmin', 'cssrewrite'),
output='generated/login.css')
css_login = Bundle('node_modules/bootstrap/dist/css/bootstrap.css',
'node_modules/font-awesome/css/font-awesome.css',
'node_modules/ionicons/dist/css/ionicons.css',
'node_modules/icheck/skins/square/blue.css',
'node_modules/admin-lte/dist/css/AdminLTE.css',
filters=('cssmin', 'cssrewrite'),
output='generated/login.css')
js_login = Bundle(
'node_modules/jquery/dist/jquery.js',
'node_modules/bootstrap/dist/js/bootstrap.js',
'node_modules/icheck/icheck.js',
'node_modules/knockout/build/output/knockout-latest.js',
'custom/js/custom.js',
filters=(ConcatFilter, 'rjsmin'),
output='generated/login.js')
js_login = Bundle('node_modules/jquery/dist/jquery.js',
'node_modules/bootstrap/dist/js/bootstrap.js',
'node_modules/icheck/icheck.js',
filters=(ConcatFilter, 'jsmin'),
output='generated/login.js')
js_validation = Bundle(
'node_modules/bootstrap-validator/dist/validator.js',
output='generated/validation.js')
js_validation = Bundle('node_modules/bootstrap-validator/dist/validator.js',
output='generated/validation.js')
css_main = Bundle(
'node_modules/@fortawesome/fontawesome-free/css/all.css',
'node_modules/datatables.net-bs4/css/dataTables.bootstrap4.css',
'node_modules/bootstrap/dist/css/bootstrap.css',
'node_modules/font-awesome/css/font-awesome.css',
'node_modules/ionicons/dist/css/ionicons.css',
'node_modules/datatables.net-bs/css/dataTables.bootstrap.css',
'node_modules/icheck/skins/square/blue.css',
'node_modules/multiselect/css/multi-select.css',
'node_modules/admin-lte/dist/css/adminlte.css',
'node_modules/admin-lte/dist/css/AdminLTE.css',
'node_modules/admin-lte/dist/css/skins/_all-skins.css',
'custom/css/custom.css',
'node_modules/bootstrap-datepicker/dist/css/bootstrap-datepicker.css',
filters=('rcssmin', 'cssrewrite'),
filters=('cssmin', 'cssrewrite'),
output='generated/main.css')
js_main = Bundle(
'node_modules/jquery/dist/jquery.js',
'node_modules/jquery-ui-dist/jquery-ui.js',
'node_modules/bootstrap/dist/js/bootstrap.bundle.js',
'node_modules/datatables.net/js/jquery.dataTables.js',
'node_modules/datatables.net-bs4/js/dataTables.bootstrap4.js',
'node_modules/jquery-sparkline/jquery.sparkline.js',
'node_modules/jquery-slimscroll/jquery.slimscroll.js',
'node_modules/jquery-validation/dist/jquery.validate.js',
'node_modules/icheck/icheck.js',
'node_modules/fastclick/lib/fastclick.js',
'node_modules/moment/moment.js',
'node_modules/admin-lte/dist/js/adminlte.js',
'node_modules/multiselect/js/jquery.multi-select.js',
'node_modules/datatables.net-plugins/sorting/natural.js',
'node_modules/jtimeout/src/jTimeout.js',
'node_modules/jquery.quicksearch/src/jquery.quicksearch.js',
'node_modules/knockout/build/output/knockout-latest.js',
'custom/js/app-authentication-settings-editor.js',
'custom/js/custom.js',
'node_modules/bootstrap-datepicker/dist/js/bootstrap-datepicker.js',
filters=(ConcatFilter, 'rjsmin'),
output='generated/main.js')
js_main = Bundle('node_modules/jquery/dist/jquery.js',
'node_modules/jquery-ui-dist/jquery-ui.js',
'node_modules/bootstrap/dist/js/bootstrap.js',
'node_modules/datatables.net/js/jquery.dataTables.js',
'node_modules/datatables.net-bs/js/dataTables.bootstrap.js',
'node_modules/jquery-sparkline/jquery.sparkline.js',
'node_modules/jquery-slimscroll/jquery.slimscroll.js',
'node_modules/icheck/icheck.js',
'node_modules/fastclick/lib/fastclick.js',
'node_modules/moment/moment.js',
'node_modules/admin-lte/dist/js/adminlte.js',
'node_modules/multiselect/js/jquery.multi-select.js',
'node_modules/datatables.net-plugins/sorting/natural.js',
'node_modules/jtimeout/src/jTimeout.js',
'node_modules/jquery.quicksearch/src/jquery.quicksearch.js',
'custom/js/custom.js',
filters=(ConcatFilter, 'jsmin'),
output='generated/main.js')
assets = Environment()
assets.register('js_login', js_login)
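`ConcatFilter` exists because naively concatenating JS files whose last statement lacks a trailing semicolon can fuse two statements into one. Joining hunks with `;` is a cheap guard; a minimal sketch of the idea (the snippets are made up):

```python
# Two JS snippets; the first is missing its trailing semicolon, so plain
# concatenation would yield the broken "var a = 1var b = 2;".
hunks = ["var a = 1", "var b = 2;"]

# What ConcatFilter.concat does with the hunk contents.
merged = ";".join(hunks)
print(merged)
```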

View File

@@ -1,19 +1,18 @@
import base64
import binascii
from functools import wraps
from flask import g, request, abort, current_app, Response
from flask import g, request, abort, current_app, render_template
from flask_login import current_user
from .models import User, ApiKey, Setting, Domain, Setting
from .lib.errors import RequestIsNotJSON, NotEnoughPrivileges, RecordTTLNotAllowed, RecordTypeNotAllowed
from .lib.errors import DomainAccessForbidden, DomainOverrideForbidden
from .lib.errors import RequestIsNotJSON, NotEnoughPrivileges
from .lib.errors import DomainAccessForbidden
def admin_role_required(f):
"""
Grant access if user is in Administrator role
"""
@wraps(f)
def decorated_function(*args, **kwargs):
if current_user.role.name != 'Administrator':
@@ -27,7 +26,6 @@ def operator_role_required(f):
"""
Grant access if user is in Operator role or higher
"""
@wraps(f)
def decorated_function(*args, **kwargs):
if current_user.role.name not in ['Administrator', 'Operator']:
@@ -37,22 +35,6 @@ def operator_role_required(f):
return decorated_function
def history_access_required(f):
"""
Grant access if user is in Operator role or higher, or Users can view history
"""
@wraps(f)
def decorated_function(*args, **kwargs):
if current_user.role.name not in [
'Administrator', 'Operator'
] and not Setting().get('allow_user_view_history'):
abort(403)
return f(*args, **kwargs)
return decorated_function
def can_access_domain(f):
"""
Grant access if:
@@ -60,7 +42,6 @@ def can_access_domain(f):
- user is in granted Account, or
- user is in granted Domain
"""
@wraps(f)
def decorated_function(*args, **kwargs):
if current_user.role.name not in ['Administrator', 'Operator']:
@@ -87,11 +68,10 @@ def can_configure_dnssec(f):
- user is in Operator role or higher, or
- dnssec_admins_only is off
"""
@wraps(f)
def decorated_function(*args, **kwargs):
if current_user.role.name not in [
'Administrator', 'Operator'
'Administrator', 'Operator'
] and Setting().get('dnssec_admins_only'):
abort(403)
@@ -100,35 +80,16 @@ def can_configure_dnssec(f):
return decorated_function
def can_remove_domain(f):
"""
Grant access if:
- user is in Operator role or higher, or
- allow_user_remove_domain is on
"""
@wraps(f)
def decorated_function(*args, **kwargs):
if current_user.role.name not in [
'Administrator', 'Operator'
] and not Setting().get('allow_user_remove_domain'):
abort(403)
return f(*args, **kwargs)
return decorated_function
def can_create_domain(f):
"""
Grant access if:
- user is in Operator role or higher, or
- allow_user_create_domain is on
"""
@wraps(f)
def decorated_function(*args, **kwargs):
if current_user.role.name not in [
'Administrator', 'Operator'
'Administrator', 'Operator'
] and not Setting().get('allow_user_create_domain'):
abort(403)
return f(*args, **kwargs)
@@ -140,64 +101,50 @@ def api_basic_auth(f):
@wraps(f)
def decorated_function(*args, **kwargs):
auth_header = request.headers.get('Authorization')
if auth_header:
auth_header = auth_header.replace('Basic ', '', 1)
if not auth_header:
try:
auth_header = str(base64.b64decode(auth_header), 'utf-8')
username, password = auth_header.split(":")
except binascii.Error as e:
current_app.logger.error(
'Invalid base64-encoded of credential. Error {0}'.format(
e))
abort(401)
except TypeError as e:
current_app.logger.error('Error: {0}'.format(e))
abort(401)
user = User(username=username,
password=password,
plain_text_password=password)
try:
if Setting().get('verify_user_email') and user.email and not user.confirmed:
current_app.logger.warning(
'Basic authentication failed for user {} because of unverified email address'
.format(username))
abort(401)
auth_method = request.args.get('auth_method', 'LOCAL')
auth_method = 'LDAP' if auth_method != 'LOCAL' else 'LOCAL'
auth = user.is_validate(method=auth_method,
src_ip=request.remote_addr)
if not auth:
current_app.logger.error('Checking user password failed')
abort(401)
else:
user = User.query.filter(User.username == username).first()
current_user = user # lgtm [py/unused-local-variable]
except Exception as e:
current_app.logger.error('Error: {0}'.format(e))
abort(401)
else:
current_app.logger.error('Error: Authorization header missing!')
abort(401)
if auth_header[:6] != "Basic ":
current_app.logger.error('Error: Unsupported authorization mechanism!')
abort(401)
# Remove "Basic " from the header value
auth_header = auth_header[6:]
auth_components = []
try:
auth_header = str(base64.b64decode(auth_header), 'utf-8')
# NK: We use auth_components here as we don't know if we'll have a colon,
# we split it maximum 1 times to grab the username, the rest of the string would be the password.
auth_components = auth_header.split(':', maxsplit=1)
except (binascii.Error, UnicodeDecodeError) as e:
current_app.logger.error(
'Invalid base64-encoded of credential. Error {0}'.format(
e))
abort(401)
except TypeError as e:
current_app.logger.error('Error: {0}'.format(e))
abort(401)
# If we don't have two auth components (username, password), we can abort
if len(auth_components) != 2:
abort(401)
(username, password) = auth_components
user = User(username=username,
password=password,
plain_text_password=password)
try:
if Setting().get('verify_user_email') and user.email and not user.confirmed:
current_app.logger.warning(
'Basic authentication failed for user {} because of unverified email address'
.format(username))
abort(401)
auth_method = request.args.get('auth_method', 'LOCAL')
auth_method = 'LDAP' if auth_method != 'LOCAL' else 'LOCAL'
auth = user.is_validate(method=auth_method, src_ip=request.remote_addr)
if not auth:
current_app.logger.error('Checking user password failed')
abort(401)
else:
user = User.query.filter(User.username == username).first()
current_user = user # lgtm [py/unused-local-variable]
except Exception as e:
current_app.logger.error('Error: {0}'.format(e))
abort(401)
return f(*args, **kwargs)
return decorated_function
@@ -214,27 +161,6 @@ def is_json(f):
return decorated_function
def callback_if_request_body_contains_key(callback, http_methods=[], keys=[]):
"""
If request body contains one or more of specified keys, call
:param callback
"""
def decorator(f):
@wraps(f)
def decorated_function(*args, **kwargs):
check_current_http_method = not http_methods or request.method in http_methods
if (check_current_http_method and
set(request.get_json(force=True).keys()).intersection(set(keys))
):
callback(*args, **kwargs)
return f(*args, **kwargs)
return decorated_function
return decorator
def api_role_can(action, roles=None, allow_self=False):
"""
Grant access if:
@@ -257,18 +183,16 @@ def api_role_can(action, roles=None, allow_self=False):
except:
username = None
if (
(current_user.role.name in roles) or
(allow_self and user_id and current_user.id == user_id) or
(allow_self and username and current_user.username == username)
(current_user.role.name in roles) or
(allow_self and user_id and current_user.id == user_id) or
(allow_self and username and current_user.username == username)
):
return f(*args, **kwargs)
msg = (
"User {} with role {} does not have enough privileges to {}"
).format(current_user.username, current_user.role.name, action)
raise NotEnoughPrivileges(message=msg)
return decorated_function
return decorator
@@ -278,92 +202,27 @@ def api_can_create_domain(f):
- user is in Operator role or higher, or
- allow_user_create_domain is on
"""
@wraps(f)
def decorated_function(*args, **kwargs):
if current_user.role.name not in [
'Administrator', 'Operator'
'Administrator', 'Operator'
] and not Setting().get('allow_user_create_domain'):
msg = "User {0} does not have enough privileges to create zone"
msg = "User {0} does not have enough privileges to create domain"
current_app.logger.error(msg.format(current_user.username))
raise NotEnoughPrivileges()
if Setting().get('deny_domain_override'):
req = request.get_json(force=True)
domain = Domain()
if req['name'] and domain.is_overriding(req['name']):
raise DomainOverrideForbidden()
return f(*args, **kwargs)
return decorated_function
def apikey_can_create_domain(f):
"""
Grant access if:
- user is in Operator role or higher, or
- allow_user_create_domain is on
and
- deny_domain_override is off or
- override_domain is true (from request)
"""
@wraps(f)
def decorated_function(*args, **kwargs):
if g.apikey.role.name not in [
'Administrator', 'Operator'
] and not Setting().get('allow_user_create_domain'):
msg = "ApiKey #{0} does not have enough privileges to create zone"
current_app.logger.error(msg.format(g.apikey.id))
raise NotEnoughPrivileges()
if Setting().get('deny_domain_override'):
req = request.get_json(force=True)
domain = Domain()
if req['name'] and domain.is_overriding(req['name']):
raise DomainOverrideForbidden()
return f(*args, **kwargs)
return decorated_function
def apikey_can_remove_domain(http_methods=[]):
"""
Grant access if:
- user is in Operator role or higher, or
- allow_user_remove_domain is on
"""
def decorator(f):
@wraps(f)
def decorated_function(*args, **kwargs):
check_current_http_method = not http_methods or request.method in http_methods
if (check_current_http_method and
g.apikey.role.name not in ['Administrator', 'Operator'] and
not Setting().get('allow_user_remove_domain')
):
msg = "ApiKey #{0} does not have enough privileges to remove zone"
current_app.logger.error(msg.format(g.apikey.id))
raise NotEnoughPrivileges()
return f(*args, **kwargs)
return decorated_function
return decorator
def apikey_is_admin(f):
"""
Grant access if user is in Administrator role
"""
@wraps(f)
def decorated_function(*args, **kwargs):
if g.apikey.role.name != 'Administrator':
msg = "Apikey {0} does not have enough privileges to create zone"
msg = "Apikey {0} does not have enough privileges to create domain"
current_app.logger.error(msg.format(g.apikey.id))
raise NotEnoughPrivileges()
return f(*args, **kwargs)
@@ -372,112 +231,21 @@ def apikey_is_admin(f):
def apikey_can_access_domain(f):
"""
Grant access if:
- user has Operator role or higher, or
- user has explicitly been granted access to domain
"""
@wraps(f)
def decorated_function(*args, **kwargs):
apikey = g.apikey
if g.apikey.role.name not in ['Administrator', 'Operator']:
zone_id = kwargs.get('zone_id').rstrip(".")
domain_names = [item.name for item in g.apikey.domains]
domains = apikey.domains
zone_id = kwargs.get('zone_id')
domain_names = [item.name for item in domains]
accounts = g.apikey.accounts
accounts_domains = [domain.name for a in accounts for domain in a.domains]
allowed_domains = set(domain_names + accounts_domains)
if zone_id not in allowed_domains:
if zone_id not in domain_names:
raise DomainAccessForbidden()
return f(*args, **kwargs)
return decorated_function
def apikey_can_configure_dnssec(http_methods=[]):
"""
Grant access if:
- user is in Operator role or higher, or
- dnssec_admins_only is off
"""
def decorator(f=None):
@wraps(f)
def decorated_function(*args, **kwargs):
check_current_http_method = not http_methods or request.method in http_methods
if (check_current_http_method and
g.apikey.role.name not in ['Administrator', 'Operator'] and
Setting().get('dnssec_admins_only')
):
msg = "ApiKey #{0} does not have enough privileges to configure dnssec"
current_app.logger.error(msg.format(g.apikey.id))
raise DomainAccessForbidden(message=msg)
return f(*args, **kwargs) if f else None
return decorated_function
return decorator
def allowed_record_types(f):
@wraps(f)
def decorated_function(*args, **kwargs):
if request.method in ['GET', 'DELETE', 'PUT']:
return f(*args, **kwargs)
if g.apikey.role.name in ['Administrator', 'Operator']:
return f(*args, **kwargs)
records_allowed_to_edit = Setting().get_records_allow_to_edit()
content = request.get_json()
try:
for record in content['rrsets']:
if 'type' not in record:
raise RecordTypeNotAllowed()
if record['type'] not in records_allowed_to_edit:
current_app.logger.error(f"Error: Record type not allowed: {record['type']}")
raise RecordTypeNotAllowed(message=f"Record type not allowed: {record['type']}")
except (TypeError, KeyError) as e:
raise e
return f(*args, **kwargs)
return decorated_function
def allowed_record_ttl(f):
@wraps(f)
def decorated_function(*args, **kwargs):
if not Setting().get('enforce_api_ttl'):
return f(*args, **kwargs)
if request.method == 'GET':
return f(*args, **kwargs)
if g.apikey.role.name in ['Administrator', 'Operator']:
return f(*args, **kwargs)
allowed_ttls = Setting().get_ttl_options()
allowed_numeric_ttls = [ttl[0] for ttl in allowed_ttls]
content = request.get_json()
try:
for record in content['rrsets']:
if 'ttl' not in record:
raise RecordTTLNotAllowed()
if record['ttl'] not in allowed_numeric_ttls:
current_app.logger.error(f"Error: Record TTL not allowed: {record['ttl']}")
raise RecordTTLNotAllowed(message=f"Record TTL not allowed: {record['ttl']}")
except (TypeError, KeyError) as e:
raise e
return f(*args, **kwargs)
return decorated_function
def apikey_auth(f):
@wraps(f)
def decorated_function(*args, **kwargs):
@@ -485,8 +253,10 @@ def apikey_auth(f):
if auth_header:
try:
apikey_val = str(base64.b64decode(auth_header), 'utf-8')
except (binascii.Error, UnicodeDecodeError) as e:
current_app.logger.error('Invalid base64-encoded X-API-KEY. Error {0}'.format(e))
except binascii.Error as e:
current_app.logger.error(
'Invalid base64-encoded of credential. Error {0}'.format(
e))
abort(401)
except TypeError as e:
current_app.logger.error('Error: {0}'.format(e))
@@ -517,19 +287,7 @@ def dyndns_login_required(f):
@wraps(f)
def decorated_function(*args, **kwargs):
if current_user.is_authenticated is False:
return Response(headers={'WWW-Authenticate': 'Basic'}, status=401)
return render_template('dyndns.html', response='badauth'), 200
return f(*args, **kwargs)
return decorated_function
def apikey_or_basic_auth(f):
@wraps(f)
def decorated_function(*args, **kwargs):
api_auth_header = request.headers.get('X-API-KEY')
if api_auth_header:
return apikey_auth(f)(*args, **kwargs)
else:
return api_basic_auth(f)(*args, **kwargs)
return decorated_function
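The reworked `api_basic_auth` checks the `Basic ` prefix, base64-decodes the remainder, and splits at most once on `:`, so passwords containing colons survive intact. A self-contained sketch of that parsing path (the credentials are made up, and HTTP-level concerns are reduced to returning `None` where the decorator would `abort(401)`):

```python
import base64

def parse_basic_auth(header: str):
    """Return (username, password), or None where the decorator aborts 401."""
    if header[:6] != "Basic ":
        return None  # unsupported authorization mechanism
    try:
        decoded = str(base64.b64decode(header[6:]), "utf-8")
    except Exception:
        return None  # invalid base64 or undecodable bytes
    # Split at most once: everything after the first colon is the password.
    parts = decoded.split(":", maxsplit=1)
    if len(parts) != 2:
        return None  # no colon at all -> not a valid credential
    return parts[0], parts[1]

token = base64.b64encode(b"alice:s3cr:et").decode()
print(parse_basic_auth("Basic " + token))
```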

View File

@@ -1,32 +1,27 @@
import os
basedir = os.path.abspath(os.path.abspath(os.path.dirname(__file__)))
basedir = os.path.abspath(os.path.dirname(__file__))
BIND_ADDRESS = '0.0.0.0'
CAPTCHA_ENABLE = True
CAPTCHA_HEIGHT = 60
CAPTCHA_LENGTH = 6
CAPTCHA_SESSION_KEY = 'captcha_image'
CAPTCHA_WIDTH = 160
CSRF_COOKIE_HTTPONLY = True
HSTS_ENABLED = False
PORT = 9191
### BASIC APP CONFIG
SALT = '$2b$12$yLUMTIfl21FKJQpTkRQXCu'
SAML_ASSERTION_ENCRYPTED = True
SAML_ENABLED = False
SECRET_KEY = 'e951e5a1f4b94151b360f47edf596dd2'
SERVER_EXTERNAL_SSL = os.getenv('SERVER_EXTERNAL_SSL', True)
SESSION_COOKIE_SAMESITE = 'Lax'
SESSION_TYPE = 'sqlalchemy'
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'pdns.db')
BIND_ADDRESS = '0.0.0.0'
PORT = 9191
HSTS_ENABLED = False
OFFLINE_MODE = False
### DATABASE CONFIG
SQLA_DB_USER = 'pda'
SQLA_DB_PASSWORD = 'changeme'
SQLA_DB_HOST = '127.0.0.1'
SQLA_DB_NAME = 'pda'
SQLALCHEMY_TRACK_MODIFICATIONS = True
# SQLA_DB_USER = 'pda'
# SQLA_DB_PASSWORD = 'changeme'
# SQLA_DB_HOST = '127.0.0.1'
# SQLA_DB_NAME = 'pda'
# SQLALCHEMY_DATABASE_URI = 'mysql://{}:{}@{}/{}'.format(
# urllib.parse.quote_plus(SQLA_DB_USER),
# urllib.parse.quote_plus(SQLA_DB_PASSWORD),
# SQLA_DB_HOST,
# SQLA_DB_NAME
# )
### DATABASE - MySQL
SQLALCHEMY_DATABASE_URI = 'mysql://'+SQLA_DB_USER+':'+SQLA_DB_PASSWORD+'@'+SQLA_DB_HOST+'/'+SQLA_DB_NAME
### DATABASE - SQLite
# SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'pdns.db')
# SAML Authnetication
SAML_ENABLED = False
SAML_ASSERTION_ENCRYPTED = True

View File

@@ -1,58 +1,48 @@
import datetime
from OpenSSL import crypto
from datetime import datetime
import pytz
import os
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509.oid import NameOID
CRYPT_PATH = os.path.abspath(os.path.dirname(os.path.realpath(__file__)) + "/../../")
CRYPT_PATH = os.path.abspath(os.path.dirname(os.path.realpath(__file__)) + "/../../")
CERT_FILE = CRYPT_PATH + "/saml_cert.crt"
KEY_FILE = CRYPT_PATH + "/saml_cert.key"
def check_certificate():
if not os.path.isfile(CERT_FILE):
return False
st_cert = open(CERT_FILE, 'rt').read()
cert = crypto.load_certificate(crypto.FILETYPE_PEM, st_cert)
now = datetime.now(pytz.utc)
begin = datetime.strptime(cert.get_notBefore(), "%Y%m%d%H%M%SZ").replace(tzinfo=pytz.UTC)
begin_ok = begin < now
end = datetime.strptime(cert.get_notAfter(), "%Y%m%d%H%M%SZ").replace(tzinfo=pytz.UTC)
end_ok = end > now
if begin_ok and end_ok:
return True
return False
def create_self_signed_cert():
""" Generate a new self-signed RSA-2048-SHA256 x509 certificate. """
# Generate our key
key = rsa.generate_private_key(
public_exponent=65537,
key_size=2048,
)
# Write our key to disk for safe keeping
with open(KEY_FILE, "wb") as key_file:
key_file.write(key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.TraditionalOpenSSL,
encryption_algorithm=serialization.NoEncryption(),
))
# create a key pair
k = crypto.PKey()
k.generate_key(crypto.TYPE_RSA, 2048)
# Various details about who we are. For a self-signed certificate the
# subject and issuer are always the same.
subject = issuer = x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, "DE"),
x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, "NRW"),
x509.NameAttribute(NameOID.LOCALITY_NAME, "Dortmund"),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Dummy Company Ltd"),
x509.NameAttribute(NameOID.ORGANIZATIONAL_UNIT_NAME, "Dummy Company Ltd"),
x509.NameAttribute(NameOID.COMMON_NAME, "PowerDNS-Admin"),
])
# create a self-signed cert
cert = crypto.X509()
cert.get_subject().C = "DE"
cert.get_subject().ST = "NRW"
cert.get_subject().L = "Dortmund"
cert.get_subject().O = "Dummy Company Ltd"
cert.get_subject().OU = "Dummy Company Ltd"
cert.get_subject().CN = "PowerDNS-Admin"
cert.set_serial_number(1000)
cert.gmtime_adj_notBefore(0)
cert.gmtime_adj_notAfter(10*365*24*60*60)
cert.set_issuer(cert.get_subject())
cert.set_pubkey(k)
cert.sign(k, 'sha256')
cert = x509.CertificateBuilder().subject_name(
subject
).issuer_name(
issuer
).public_key(
key.public_key()
).serial_number(
x509.random_serial_number()
).not_valid_before(
datetime.datetime.utcnow()
).not_valid_after(
datetime.datetime.utcnow() + datetime.timedelta(days=10*365)
).sign(key, hashes.SHA256())
# Write our certificate out to disk.
with open(CERT_FILE, "wb") as cert_file:
cert_file.write(cert.public_bytes(serialization.Encoding.PEM))
open(CERT_FILE, "bw").write(
crypto.dump_certificate(crypto.FILETYPE_PEM, cert))
open(KEY_FILE, "bw").write(
crypto.dump_privatekey(crypto.FILETYPE_PEM, k))
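`check_certificate` accepts a certificate only when the current time falls between its notBefore and notAfter timestamps. The window test itself is plain datetime arithmetic; a sketch with invented timestamps (the ten-year span matches the lifetime used when generating the certificate above):

```python
from datetime import datetime, timedelta, timezone

def within_validity(now, not_before, not_after):
    """The begin_ok/end_ok check from check_certificate, isolated."""
    return not_before < now < not_after

now = datetime(2024, 6, 1, tzinfo=timezone.utc)       # example clock
begin = now - timedelta(days=30)                      # notBefore
end = now + timedelta(days=10 * 365)                  # notAfter, ~10 years

print(within_validity(now, begin, end))
print(within_validity(now, begin, now - timedelta(days=1)))  # expired cert
```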

View File

@@ -21,7 +21,7 @@ class StructuredException(Exception):
class DomainNotExists(StructuredException):
status_code = 404
def __init__(self, name=None, message="Zone does not exist"):
def __init__(self, name=None, message="Domain does not exist"):
StructuredException.__init__(self)
self.message = message
self.name = name
@@ -30,7 +30,7 @@ class DomainNotExists(StructuredException):
class DomainAlreadyExists(StructuredException):
status_code = 409
def __init__(self, name=None, message="Zone already exists"):
def __init__(self, name=None, message="Domain already exists"):
StructuredException.__init__(self)
self.message = message
self.name = name
@@ -39,18 +39,11 @@ class DomainAlreadyExists(StructuredException):
class DomainAccessForbidden(StructuredException):
status_code = 403
def __init__(self, name=None, message="Zone access not allowed"):
def __init__(self, name=None, message="Domain access not allowed"):
StructuredException.__init__(self)
self.message = message
self.name = name
class DomainOverrideForbidden(StructuredException):
status_code = 409
def __init__(self, name=None, message="Zone override of record not allowed"):
StructuredException.__init__(self)
self.message = message
self.name = name
class ApiKeyCreateFail(StructuredException):
status_code = 500
@@ -67,8 +60,7 @@ class ApiKeyNotUsable(StructuredException):
def __init__(
self,
name=None,
message=("Api key must have zones or accounts"
" or an administrative role")):
message="Api key must have domains or have administrative role"):
StructuredException.__init__(self)
self.message = message
self.name = name
@@ -101,15 +93,6 @@ class AccountCreateFail(StructuredException):
self.name = name
class AccountCreateDuplicate(StructuredException):
status_code = 409
def __init__(self, name=None, message="Creation of account failed"):
StructuredException.__init__(self)
self.message = message
self.name = name
class AccountUpdateFail(StructuredException):
status_code = 500
@@ -128,22 +111,6 @@ class AccountDeleteFail(StructuredException):
self.name = name
class AccountNotExists(StructuredException):
status_code = 404
def __init__(self, name=None, message="Account does not exist"):
StructuredException.__init__(self)
self.message = message
self.name = name
class InvalidAccountNameException(StructuredException):
status_code = 400
def __init__(self, name=None, message="The account name is invalid"):
StructuredException.__init__(self)
self.message = message
self.name = name
class UserCreateFail(StructuredException):
status_code = 500
@@ -152,13 +119,6 @@ class UserCreateFail(StructuredException):
self.message = message
self.name = name
class UserCreateDuplicate(StructuredException):
status_code = 409
def __init__(self, name=None, message="Creation of user failed"):
StructuredException.__init__(self)
self.message = message
self.name = name
class UserUpdateFail(StructuredException):
status_code = 500
@@ -168,13 +128,6 @@ class UserUpdateFail(StructuredException):
self.message = message
self.name = name
class UserUpdateFailEmail(StructuredException):
status_code = 409
def __init__(self, name=None, message="Update of user failed"):
StructuredException.__init__(self)
self.message = message
self.name = name
class UserDeleteFail(StructuredException):
status_code = 500
@@ -183,19 +136,3 @@ class UserDeleteFail(StructuredException):
StructuredException.__init__(self)
self.message = message
self.name = name
class RecordTypeNotAllowed(StructuredException):
status_code = 400
def __init__(self, name=None, message="Record type not allowed or not present"):
StructuredException.__init__(self)
self.message = message
self.name = name
class RecordTTLNotAllowed(StructuredException):
status_code = 400
def __init__(self, name=None, message="Record TTL not allowed or not present"):
StructuredException.__init__(self)
self.message = message
self.name = name
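The exception classes above all follow one pattern: an HTTP status code plus a default message carried on a shared base class. A minimal, framework-free sketch of that pattern (the `to_dict` helper and class names here are illustrative, not the project's):

```python
class StructuredException(Exception):
    """Base class pairing an HTTP status code with an error message."""
    status_code = 500

    def __init__(self, name=None, message="Unknown error"):
        super().__init__(message)
        self.message = message
        self.name = name

    def to_dict(self):
        # Shape of the payload a Flask error handler could jsonify.
        return {"msg": self.message, "name": self.name}


class ZoneNotExists(StructuredException):
    status_code = 404

    def __init__(self, name=None, message="Zone does not exist"):
        super().__init__(name=name, message=message)


err = ZoneNotExists(name="example.org")
```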

View File

@@ -14,9 +14,9 @@ def forward_request():
msg_str = "Sending request to powerdns API {0}"
if request.method != 'GET' and request.method != 'DELETE':
msg = msg_str.format(request.get_json(force=True, silent=True))
msg = msg_str.format(request.get_json(force=True))
current_app.logger.debug(msg)
data = request.get_json(force=True, silent=True)
data = request.get_json(force=True)
verify = False
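The change from `get_json(force=True)` to `get_json(force=True, silent=True)` matters because `silent=True` makes Flask return `None` on malformed JSON instead of raising a 400. Roughly this behavior, as a stdlib sketch (not Flask's implementation):

```python
import json


def get_json_silently(raw, silent=True):
    """Parse JSON; with silent=True return None on bad input
    instead of raising, mirroring Flask's request.get_json()."""
    try:
        return json.loads(raw)
    except ValueError:
        if silent:
            return None
        raise


body = get_json_silently('{"rrsets": []}')   # parsed normally
bad = get_json_silently("not json")          # swallowed, returns None
```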

View File

@@ -11,21 +11,10 @@ class RoleSchema(Schema):
name = fields.String()
class AccountSummarySchema(Schema):
id = fields.Integer()
name = fields.String()
domains = fields.Embed(schema=DomainSchema, many=True)
class ApiKeySummarySchema(Schema):
id = fields.Integer()
description = fields.String()
class ApiKeySchema(Schema):
id = fields.Integer()
role = fields.Embed(schema=RoleSchema)
domains = fields.Embed(schema=DomainSchema, many=True)
accounts = fields.Embed(schema=AccountSummarySchema, many=True)
description = fields.String()
key = fields.String()
@@ -34,7 +23,6 @@ class ApiPlainKeySchema(Schema):
id = fields.Integer()
role = fields.Embed(schema=RoleSchema)
domains = fields.Embed(schema=DomainSchema, many=True)
accounts = fields.Embed(schema=AccountSummarySchema, many=True)
description = fields.String()
plain_key = fields.String()
@@ -47,14 +35,6 @@ class UserSchema(Schema):
email = fields.String()
role = fields.Embed(schema=RoleSchema)
class UserDetailedSchema(Schema):
id = fields.Integer()
username = fields.String()
firstname = fields.String()
lastname = fields.String()
email = fields.String()
role = fields.Embed(schema=RoleSchema)
accounts = fields.Embed(schema=AccountSummarySchema, many=True)
class AccountSchema(Schema):
id = fields.Integer()
@@ -63,4 +43,3 @@ class AccountSchema(Schema):
contact = fields.String()
mail = fields.String()
domains = fields.Embed(schema=DomainSchema, many=True)
apikeys = fields.Embed(schema=ApiKeySummarySchema, many=True)
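The schema changes above add embedded sub-schemas in both directions (accounts on API keys, API keys on accounts). The embedding itself is just recursive serialization; a hypothetical stdlib sketch of the same shape using dataclasses:

```python
from dataclasses import asdict, dataclass, field
from typing import List


@dataclass
class AccountSummary:
    id: int
    name: str


@dataclass
class ApiKeyView:
    id: int
    description: str
    # Embedded sub-records, like fields.Embed(..., many=True)
    accounts: List[AccountSummary] = field(default_factory=list)


key = ApiKeyView(id=1, description="deploy key",
                 accounts=[AccountSummary(id=7, name="ops")])
payload = asdict(key)  # nested dataclasses serialize recursively
```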

View File

@@ -1,638 +0,0 @@
import os
from pathlib import Path
basedir = os.path.abspath(Path(os.path.dirname(__file__)).parent)
class AppSettings(object):
defaults = {
# Flask Settings
'bind_address': '0.0.0.0',
'csrf_cookie_secure': False,
'log_level': 'WARNING',
'port': 9191,
'salt': '$2b$12$yLUMTIfl21FKJQpTkRQXCu',
'secret_key': 'e951e5a1f4b94151b360f47edf596dd2',
'session_cookie_secure': False,
'session_type': 'sqlalchemy',
'sqlalchemy_track_modifications': True,
'sqlalchemy_database_uri': 'sqlite:///' + os.path.join(basedir, 'pdns.db'),
'sqlalchemy_engine_options': {},
# General Settings
'captcha_enable': True,
'captcha_height': 60,
'captcha_length': 6,
'captcha_session_key': 'captcha_image',
'captcha_width': 160,
'mail_server': 'localhost',
'mail_port': 25,
'mail_debug': False,
'mail_use_ssl': False,
'mail_use_tls': False,
'mail_username': '',
'mail_password': '',
'mail_default_sender': '',
'remote_user_enabled': False,
'remote_user_cookies': [],
'remote_user_logout_url': '',
'hsts_enabled': False,
'server_external_ssl': True,
'maintenance': False,
'fullscreen_layout': True,
'record_helper': True,
'login_ldap_first': True,
'default_record_table_size': 15,
'default_domain_table_size': 10,
'auto_ptr': False,
'record_quick_edit': True,
'pretty_ipv6_ptr': False,
'dnssec_admins_only': False,
'allow_user_create_domain': False,
'allow_user_remove_domain': False,
'allow_user_view_history': False,
'custom_history_header': '',
'delete_sso_accounts': False,
'bg_domain_updates': False,
'enable_api_rr_history': True,
'preserve_history': False,
'site_name': 'PowerDNS-Admin',
'site_url': 'http://localhost:9191',
'session_timeout': 10,
'warn_session_timeout': True,
'pdns_api_url': '',
'pdns_api_key': '',
'pdns_api_timeout': 30,
'pdns_version': '4.1.1',
'verify_ssl_connections': True,
'verify_user_email': False,
'enforce_api_ttl': False,
'ttl_options': '1 minute,5 minutes,30 minutes,60 minutes,24 hours',
'otp_field_enabled': True,
'custom_css': '',
'otp_force': False,
'max_history_records': 1000,
'deny_domain_override': False,
'account_name_extra_chars': False,
'gravatar_enabled': False,
'pdns_admin_log_level': 'WARNING',
# Local Authentication Settings
'local_db_enabled': True,
'signup_enabled': True,
'pwd_enforce_characters': False,
'pwd_min_len': 10,
'pwd_min_lowercase': 3,
'pwd_min_uppercase': 2,
'pwd_min_digits': 2,
'pwd_min_special': 1,
'pwd_enforce_complexity': False,
'pwd_min_complexity': 11,
# LDAP Authentication Settings
'ldap_enabled': False,
'ldap_type': 'ldap',
'ldap_uri': '',
'ldap_base_dn': '',
'ldap_admin_username': '',
'ldap_admin_password': '',
'ldap_domain': '',
'ldap_filter_basic': '',
'ldap_filter_username': '',
'ldap_filter_group': '',
'ldap_filter_groupname': '',
'ldap_sg_enabled': False,
'ldap_admin_group': '',
'ldap_operator_group': '',
'ldap_user_group': '',
'autoprovisioning': False,
'autoprovisioning_attribute': '',
'urn_value': '',
'purge': False,
# Google OAuth Settings
'google_oauth_enabled': False,
'google_oauth_client_id': '',
'google_oauth_client_secret': '',
'google_oauth_scope': 'openid email profile',
'google_base_url': 'https://www.googleapis.com/oauth2/v3/',
'google_oauth_auto_configure': True,
'google_oauth_metadata_url': 'https://accounts.google.com/.well-known/openid-configuration',
'google_token_url': 'https://oauth2.googleapis.com/token',
'google_authorize_url': 'https://accounts.google.com/o/oauth2/v2/auth',
# GitHub OAuth Settings
'github_oauth_enabled': False,
'github_oauth_key': '',
'github_oauth_secret': '',
'github_oauth_scope': 'email',
'github_oauth_api_url': 'https://api.github.com/user',
'github_oauth_auto_configure': False,
'github_oauth_metadata_url': '',
'github_oauth_token_url': 'https://github.com/login/oauth/access_token',
'github_oauth_authorize_url': 'https://github.com/login/oauth/authorize',
# Azure OAuth Settings
'azure_oauth_enabled': False,
'azure_oauth_key': '',
'azure_oauth_secret': '',
'azure_oauth_scope': 'User.Read openid email profile',
'azure_oauth_api_url': 'https://graph.microsoft.com/v1.0/',
'azure_oauth_auto_configure': True,
'azure_oauth_metadata_url': '',
'azure_oauth_token_url': '',
'azure_oauth_authorize_url': '',
'azure_sg_enabled': False,
'azure_admin_group': '',
'azure_operator_group': '',
'azure_user_group': '',
'azure_group_accounts_enabled': False,
'azure_group_accounts_name': 'displayName',
'azure_group_accounts_name_re': '',
'azure_group_accounts_description': 'description',
'azure_group_accounts_description_re': '',
# OIDC OAuth Settings
'oidc_oauth_enabled': False,
'oidc_oauth_key': '',
'oidc_oauth_secret': '',
'oidc_oauth_scope': 'email',
'oidc_oauth_api_url': '',
'oidc_oauth_auto_configure': True,
'oidc_oauth_metadata_url': '',
'oidc_oauth_token_url': '',
'oidc_oauth_authorize_url': '',
'oidc_oauth_logout_url': '',
'oidc_oauth_username': 'preferred_username',
'oidc_oauth_email': 'email',
'oidc_oauth_firstname': 'given_name',
'oidc_oauth_last_name': 'family_name',
'oidc_oauth_account_name_property': '',
'oidc_oauth_account_description_property': '',
# SAML Authentication Settings
'saml_enabled': False,
'saml_debug': False,
'saml_path': os.path.join(basedir, 'saml'),
'saml_metadata_url': None,
'saml_metadata_cache_lifetime': 1,
'saml_idp_sso_binding': 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect',
'saml_idp_entity_id': None,
'saml_nameid_format': None,
'saml_attribute_account': None,
'saml_attribute_email': 'email',
'saml_attribute_givenname': 'givenname',
'saml_attribute_surname': 'surname',
'saml_attribute_name': None,
'saml_attribute_username': None,
'saml_attribute_admin': None,
'saml_attribute_group': None,
'saml_group_admin_name': None,
'saml_group_operator_name': None,
'saml_group_to_account_mapping': None,
'saml_sp_entity_id': None,
'saml_sp_contact_name': None,
'saml_sp_contact_mail': None,
'saml_sign_request': False,
'saml_want_message_signed': True,
'saml_logout': True,
'saml_logout_url': None,
'saml_assertion_encrypted': True,
'saml_cert': None,
'saml_key': None,
# Zone Record Settings
'forward_records_allow_edit': {
'A': True,
'AAAA': True,
'AFSDB': False,
'ALIAS': False,
'CAA': True,
'CERT': False,
'CDNSKEY': False,
'CDS': False,
'CNAME': True,
'DNSKEY': False,
'DNAME': False,
'DS': False,
'HINFO': False,
'KEY': False,
'LOC': True,
'LUA': False,
'MX': True,
'NAPTR': False,
'NS': True,
'NSEC': False,
'NSEC3': False,
'NSEC3PARAM': False,
'OPENPGPKEY': False,
'PTR': True,
'RP': False,
'RRSIG': False,
'SOA': False,
'SPF': True,
'SSHFP': False,
'SRV': True,
'TKEY': False,
'TSIG': False,
'TLSA': False,
'SMIMEA': False,
'TXT': True,
'URI': False
},
'reverse_records_allow_edit': {
'A': False,
'AAAA': False,
'AFSDB': False,
'ALIAS': False,
'CAA': False,
'CERT': False,
'CDNSKEY': False,
'CDS': False,
'CNAME': False,
'DNSKEY': False,
'DNAME': False,
'DS': False,
'HINFO': False,
'KEY': False,
'LOC': True,
'LUA': False,
'MX': False,
'NAPTR': False,
'NS': True,
'NSEC': False,
'NSEC3': False,
'NSEC3PARAM': False,
'OPENPGPKEY': False,
'PTR': True,
'RP': False,
'RRSIG': False,
'SOA': False,
'SPF': False,
'SSHFP': False,
'SRV': False,
'TKEY': False,
'TSIG': False,
'TLSA': False,
'SMIMEA': False,
'TXT': True,
'URI': False
},
}
types = {
# Flask Settings
'bind_address': str,
'csrf_cookie_secure': bool,
'log_level': str,
'port': int,
'salt': str,
'secret_key': str,
'session_cookie_secure': bool,
'session_type': str,
'sqlalchemy_track_modifications': bool,
'sqlalchemy_database_uri': str,
'sqlalchemy_engine_options': dict,
# General Settings
'captcha_enable': bool,
'captcha_height': int,
'captcha_length': int,
'captcha_session_key': str,
'captcha_width': int,
'mail_server': str,
'mail_port': int,
'mail_debug': bool,
'mail_use_ssl': bool,
'mail_use_tls': bool,
'mail_username': str,
'mail_password': str,
'mail_default_sender': str,
'hsts_enabled': bool,
'remote_user_enabled': bool,
'remote_user_cookies': list,
'remote_user_logout_url': str,
'maintenance': bool,
'fullscreen_layout': bool,
'record_helper': bool,
'login_ldap_first': bool,
'default_record_table_size': int,
'default_domain_table_size': int,
'auto_ptr': bool,
'record_quick_edit': bool,
'pretty_ipv6_ptr': bool,
'dnssec_admins_only': bool,
'allow_user_create_domain': bool,
'allow_user_remove_domain': bool,
'allow_user_view_history': bool,
'custom_history_header': str,
'delete_sso_accounts': bool,
'bg_domain_updates': bool,
'enable_api_rr_history': bool,
'preserve_history': bool,
'site_name': str,
'site_url': str,
'session_timeout': int,
'warn_session_timeout': bool,
'pdns_api_url': str,
'pdns_api_key': str,
'pdns_api_timeout': int,
'pdns_version': str,
'verify_ssl_connections': bool,
'verify_user_email': bool,
'enforce_api_ttl': bool,
'ttl_options': str,
'otp_field_enabled': bool,
'custom_css': str,
'otp_force': bool,
'max_history_records': int,
'deny_domain_override': bool,
'account_name_extra_chars': bool,
'gravatar_enabled': bool,
'pdns_admin_log_level': str,
'forward_records_allow_edit': dict,
'reverse_records_allow_edit': dict,
# Local Authentication Settings
'local_db_enabled': bool,
'signup_enabled': bool,
'pwd_enforce_characters': bool,
'pwd_min_len': int,
'pwd_min_lowercase': int,
'pwd_min_uppercase': int,
'pwd_min_digits': int,
'pwd_min_special': int,
'pwd_enforce_complexity': bool,
'pwd_min_complexity': int,
# LDAP Authentication Settings
'ldap_enabled': bool,
'ldap_type': str,
'ldap_uri': str,
'ldap_base_dn': str,
'ldap_admin_username': str,
'ldap_admin_password': str,
'ldap_domain': str,
'ldap_filter_basic': str,
'ldap_filter_username': str,
'ldap_filter_group': str,
'ldap_filter_groupname': str,
'ldap_sg_enabled': bool,
'ldap_admin_group': str,
'ldap_operator_group': str,
'ldap_user_group': str,
'autoprovisioning': bool,
'autoprovisioning_attribute': str,
'urn_value': str,
'purge': bool,
# Google OAuth Settings
'google_oauth_enabled': bool,
'google_oauth_client_id': str,
'google_oauth_client_secret': str,
'google_oauth_scope': str,
'google_base_url': str,
'google_oauth_auto_configure': bool,
'google_oauth_metadata_url': str,
'google_token_url': str,
'google_authorize_url': str,
# GitHub OAuth Settings
'github_oauth_enabled': bool,
'github_oauth_key': str,
'github_oauth_secret': str,
'github_oauth_scope': str,
'github_oauth_api_url': str,
'github_oauth_auto_configure': bool,
'github_oauth_metadata_url': str,
'github_oauth_token_url': str,
'github_oauth_authorize_url': str,
# Azure OAuth Settings
'azure_oauth_enabled': bool,
'azure_oauth_key': str,
'azure_oauth_secret': str,
'azure_oauth_scope': str,
'azure_oauth_api_url': str,
'azure_oauth_auto_configure': bool,
'azure_oauth_metadata_url': str,
'azure_oauth_token_url': str,
'azure_oauth_authorize_url': str,
'azure_sg_enabled': bool,
'azure_admin_group': str,
'azure_operator_group': str,
'azure_user_group': str,
'azure_group_accounts_enabled': bool,
'azure_group_accounts_name': str,
'azure_group_accounts_name_re': str,
'azure_group_accounts_description': str,
'azure_group_accounts_description_re': str,
# OIDC OAuth Settings
'oidc_oauth_enabled': bool,
'oidc_oauth_key': str,
'oidc_oauth_secret': str,
'oidc_oauth_scope': str,
'oidc_oauth_api_url': str,
'oidc_oauth_auto_configure': bool,
'oidc_oauth_metadata_url': str,
'oidc_oauth_token_url': str,
'oidc_oauth_authorize_url': str,
'oidc_oauth_logout_url': str,
'oidc_oauth_username': str,
'oidc_oauth_email': str,
'oidc_oauth_firstname': str,
'oidc_oauth_last_name': str,
'oidc_oauth_account_name_property': str,
'oidc_oauth_account_description_property': str,
# SAML Authentication Settings
'saml_enabled': bool,
'saml_debug': bool,
'saml_path': str,
'saml_metadata_url': str,
'saml_metadata_cache_lifetime': int,
'saml_idp_sso_binding': str,
'saml_idp_entity_id': str,
'saml_nameid_format': str,
'saml_attribute_account': str,
'saml_attribute_email': str,
'saml_attribute_givenname': str,
'saml_attribute_surname': str,
'saml_attribute_name': str,
'saml_attribute_username': str,
'saml_attribute_admin': str,
'saml_attribute_group': str,
'saml_group_admin_name': str,
'saml_group_operator_name': str,
'saml_group_to_account_mapping': str,
'saml_sp_entity_id': str,
'saml_sp_contact_name': str,
'saml_sp_contact_mail': str,
'saml_sign_request': bool,
'saml_want_message_signed': bool,
'saml_logout': bool,
'saml_logout_url': str,
'saml_assertion_encrypted': bool,
'saml_cert': str,
'saml_key': str,
}
groups = {
'authentication': [
# Local Authentication Settings
'local_db_enabled',
'signup_enabled',
'pwd_enforce_characters',
'pwd_min_len',
'pwd_min_lowercase',
'pwd_min_uppercase',
'pwd_min_digits',
'pwd_min_special',
'pwd_enforce_complexity',
'pwd_min_complexity',
# LDAP Authentication Settings
'ldap_enabled',
'ldap_type',
'ldap_uri',
'ldap_base_dn',
'ldap_admin_username',
'ldap_admin_password',
'ldap_domain',
'ldap_filter_basic',
'ldap_filter_username',
'ldap_filter_group',
'ldap_filter_groupname',
'ldap_sg_enabled',
'ldap_admin_group',
'ldap_operator_group',
'ldap_user_group',
'autoprovisioning',
'autoprovisioning_attribute',
'urn_value',
'purge',
# Google OAuth Settings
'google_oauth_enabled',
'google_oauth_client_id',
'google_oauth_client_secret',
'google_oauth_scope',
'google_base_url',
'google_oauth_auto_configure',
'google_oauth_metadata_url',
'google_token_url',
'google_authorize_url',
# GitHub OAuth Settings
'github_oauth_enabled',
'github_oauth_key',
'github_oauth_secret',
'github_oauth_scope',
'github_oauth_api_url',
'github_oauth_auto_configure',
'github_oauth_metadata_url',
'github_oauth_token_url',
'github_oauth_authorize_url',
# Azure OAuth Settings
'azure_oauth_enabled',
'azure_oauth_key',
'azure_oauth_secret',
'azure_oauth_scope',
'azure_oauth_api_url',
'azure_oauth_auto_configure',
'azure_oauth_metadata_url',
'azure_oauth_token_url',
'azure_oauth_authorize_url',
'azure_sg_enabled',
'azure_admin_group',
'azure_operator_group',
'azure_user_group',
'azure_group_accounts_enabled',
'azure_group_accounts_name',
'azure_group_accounts_name_re',
'azure_group_accounts_description',
'azure_group_accounts_description_re',
# OIDC OAuth Settings
'oidc_oauth_enabled',
'oidc_oauth_key',
'oidc_oauth_secret',
'oidc_oauth_scope',
'oidc_oauth_api_url',
'oidc_oauth_auto_configure',
'oidc_oauth_metadata_url',
'oidc_oauth_token_url',
'oidc_oauth_authorize_url',
'oidc_oauth_logout_url',
'oidc_oauth_username',
'oidc_oauth_email',
'oidc_oauth_firstname',
'oidc_oauth_last_name',
'oidc_oauth_account_name_property',
'oidc_oauth_account_description_property',
]
}
@staticmethod
def convert_type(name, value):
import json
from json import JSONDecodeError
if name in AppSettings.types:
var_type = AppSettings.types[name]
# Handle boolean values
if var_type == bool and isinstance(value, str):
# 'True' can never match after .lower(), and value is a str here
return value.lower() in ('true', '1')
# Handle float values
if var_type == float:
return float(value)
# Handle integer values
if var_type == int:
return int(value)
if (var_type == dict or var_type == list) and isinstance(value, str) and len(value) > 0:
try:
return json.loads(value)
except JSONDecodeError as e:
# Provide backwards compatibility for legacy non-JSON format
value = value.replace("'", '"').replace('True', 'true').replace('False', 'false')
try:
return json.loads(value)
except JSONDecodeError as e:
raise ValueError('Cannot parse json {} for variable {}'.format(value, name))
if var_type == str:
return str(value)
return value
@staticmethod
def load_environment(app):
""" Load app settings from environment variables when defined. """
import os
for var_name, default_value in AppSettings.defaults.items():
env_name = var_name.upper()
current_value = None
if env_name + '_FILE' in os.environ:
if env_name in os.environ:
raise AttributeError(
"Both {} and {} are set but are exclusive.".format(
env_name, env_name + '_FILE'))
with open(os.environ[env_name + '_FILE']) as f:
current_value = f.read()
elif env_name in os.environ:
current_value = os.environ[env_name]
if current_value is not None:
app.config[env_name] = AppSettings.convert_type(var_name, current_value)
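`load_environment` accepts either `NAME` or `NAME_FILE` variables (the latter for Docker-secret style file mounts), refuses both at once, and pushes the raw string through `convert_type`. A condensed sketch of that flow (function and setting names here are illustrative):

```python
import json
import os

# A tiny stand-in for AppSettings.types
TYPES = {"port": int, "maintenance": bool, "sqlalchemy_engine_options": dict}


def convert(name, value):
    t = TYPES.get(name, str)
    if t is bool:
        return value.lower() in ("true", "1")
    if t in (dict, list):
        return json.loads(value)
    return t(value)


def load_setting(name, environ):
    env, env_file = name.upper(), name.upper() + "_FILE"
    if env_file in environ:
        if env in environ:
            raise AttributeError(
                "Both {} and {} are set but are exclusive.".format(env, env_file))
        with open(environ[env_file]) as f:
            raw = f.read()
    elif env in environ:
        raw = environ[env]
    else:
        return None  # setting not overridden by the environment
    return convert(name, raw)
```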

View File

@@ -2,8 +2,8 @@ import logging
import re
import json
import requests
import hashlib
import ipaddress
import idna
from collections.abc import Iterable
from distutils.version import StrictVersion
@@ -103,13 +103,6 @@ def fetch_json(remote_url,
data = None
try:
data = json.loads(r.content.decode('utf-8'))
except UnicodeDecodeError:
# If decoding fails, fall back to the slower but more robust .json() method
try:
logging.warning("UTF-8 content.decode failed, switching to slower .json method")
data = r.json()
except Exception as e:
raise e
except Exception as e:
raise RuntimeError(
'Error while loading JSON data from {0}'.format(remote_url)) from e
@@ -132,16 +125,6 @@ def display_master_name(data):
return ", ".join(matches)
def format_zone_type(data):
"""Format the given zone type using inclusive primary/secondary terminology."""
data = str(data).lower()
if data == 'master':
data = 'primary'
elif data == 'slave':
data = 'secondary'
return data.title()
def display_time(amount, units='s', remove_seconds=True):
"""
Convert timestamp to normal time format
@@ -194,6 +177,17 @@ def pdns_api_extended_uri(version):
return ""
def email_to_gravatar_url(email="", size=100):
"""
AD accounts don't necessarily have an email address, so fall back to "".
"""
if email is None:
email = ""
hash_string = hashlib.md5(email.encode('utf-8')).hexdigest()
return "https://s.gravatar.com/avatar/{0}?s={1}".format(hash_string, size)
def display_setting_state(value):
if value == 1:
return "ON"
@@ -227,49 +221,10 @@ def ensure_list(l):
yield from l
def pretty_domain_name(domain_name):
# Log the received zone name for debugging instead of printing to stdout
logging.debug("Received zone name: %s", domain_name)
# Check whether any label of the domain name is Punycode-encoded
# (Punycode labels start with the 'xn--' ACE prefix)
if domain_name.startswith('xn--') or '.xn--' in domain_name:
try:
# Decode the domain name using the idna library
domain_name = idna.decode(domain_name)
except Exception as e:
# If the decoding fails, raise an exception with more information
raise Exception('Cannot decode IDN zone: {}'.format(e))
# Return the "pretty" version of the zone name
return domain_name
def to_idna(value, action):
splits = value.split('.')
result = []
if action == 'encode':
for split in splits:
try:
# Try encoding to idna
if not split.startswith('_') and not split.startswith('-'):
result.append(idna.encode(split).decode())
else:
result.append(split)
except idna.IDNAError:
result.append(split)
elif action == 'decode':
for split in splits:
if not split.startswith('_') and not split.startswith('--'):
result.append(idna.decode(split))
else:
result.append(split)
else:
raise Exception('No valid action received')
return '.'.join(result)
def format_datetime(value, format_str="%Y-%m-%d %I:%M %p"):
"""Format a date time to (Default): YYYY-MM-DD HH:MM P"""
if value is None:
return ""
return value.strftime(format_str)
class customBoxes:
boxes = {
"reverse": (" ", " "),
"ip6arpa": ("ip6", "%.ip6.arpa"),
"inaddrarpa": ("in-addr", "%.in-addr.arpa")
}
order = ["reverse", "ip6arpa", "inaddrarpa"]

View File

@@ -8,7 +8,6 @@ from .account_user import AccountUser
from .server import Server
from .history import History
from .api_key import ApiKey
from .api_key_account import ApiKeyAccount
from .setting import Setting
from .domain import Domain
from .domain_setting import DomainSetting

View File

@@ -3,7 +3,6 @@ from flask import current_app
from urllib.parse import urljoin
from ..lib import utils
from ..lib.errors import InvalidAccountNameException
from .base import db
from .setting import Setting
from .user import User
@@ -18,12 +17,9 @@ class Account(db.Model):
contact = db.Column(db.String(128))
mail = db.Column(db.String(128))
domains = db.relationship("Domain", back_populates="account")
apikeys = db.relationship("ApiKey",
secondary="apikey_account",
back_populates="accounts")
def __init__(self, name=None, description=None, contact=None, mail=None):
self.name = Account.sanitize_name(name) if name is not None else name
self.name = name
self.description = description
self.contact = contact
self.mail = mail
@@ -34,30 +30,9 @@ class Account(db.Model):
self.PDNS_VERSION = Setting().get('pdns_version')
self.API_EXTENDED_URL = utils.pdns_api_extended_uri(self.PDNS_VERSION)
@staticmethod
def sanitize_name(name):
"""
Formats the provided name to fit into the constraint
"""
if not isinstance(name, str):
raise InvalidAccountNameException("Account name must be a string")
allowed_characters = "abcdefghijklmnopqrstuvwxyz0123456789"
if Setting().get('account_name_extra_chars'):
allowed_characters += "_-."
sanitized_name = ''.join(c for c in name.lower() if c in allowed_characters)
if len(sanitized_name) > Account.name.type.length:
current_app.logger.error("Account name {0} too long. Truncated to: {1}".format(
sanitized_name, sanitized_name[:Account.name.type.length]))
if not sanitized_name:
raise InvalidAccountNameException("Empty string is not a valid account name")
return sanitized_name[:Account.name.type.length]
if self.name is not None:
self.name = ''.join(c for c in self.name.lower()
if c in "abcdefghijklmnopqrstuvwxyz0123456789")
def __repr__(self):
return '<Account {0}>'.format(self.name)
@@ -90,9 +65,11 @@
"""
Create a new account
"""
self.name = Account.sanitize_name(self.name)
# Sanity check - account name
if self.name == "":
return {'status': False, 'msg': 'No account name specified'}
# Check that account name is not already used
# check that account name is not already used
account = Account.query.filter(Account.name == self.name).first()
if account:
return {'status': False, 'msg': 'Account already exists'}
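`Account.sanitize_name` above lowercases, strips disallowed characters, and truncates to the column length. A standalone sketch of the same logic (the 40-character limit is an assumption standing in for `Account.name.type.length`):

```python
NAME_MAX_LEN = 40  # stands in for Account.name.type.length


def sanitize_name(name, extra_chars=False):
    """Lowercase, strip disallowed characters, and truncate."""
    if not isinstance(name, str):
        raise ValueError("Account name must be a string")
    allowed = "abcdefghijklmnopqrstuvwxyz0123456789"
    if extra_chars:
        allowed += "_-."
    sanitized = "".join(c for c in name.lower() if c in allowed)
    if not sanitized:
        raise ValueError("Empty string is not a valid account name")
    return sanitized[:NAME_MAX_LEN]
```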

View File

@@ -1,12 +1,12 @@
import secrets
import random
import string
import bcrypt
from flask import current_app
from .base import db
from .base import db, domain_apikey
from ..models.role import Role
from ..models.domain import Domain
from ..models.account import Account
class ApiKey(db.Model):
__tablename__ = "apikey"
@@ -16,21 +16,17 @@
role_id = db.Column(db.Integer, db.ForeignKey('role.id'))
role = db.relationship('Role', back_populates="apikeys", lazy=True)
domains = db.relationship("Domain",
secondary="domain_apikey",
secondary=domain_apikey,
back_populates="apikeys")
accounts = db.relationship("Account",
secondary="apikey_account",
back_populates="apikeys")
def __init__(self, key=None, desc=None, role_name=None, domains=[], accounts=[]):
def __init__(self, key=None, desc=None, role_name=None, domains=[]):
self.id = None
self.description = desc
self.role_name = role_name
self.domains[:] = domains
self.accounts[:] = accounts
if not key:
rand_key = ''.join(
secrets.choice(string.ascii_letters + string.digits)
random.choice(string.ascii_letters + string.digits)
for _ in range(15))
self.plain_key = rand_key
self.key = self.get_hashed_password(rand_key).decode('utf-8')
@@ -58,33 +54,27 @@
db.session.rollback()
raise e
def update(self, role_name=None, description=None, domains=None, accounts=None):
def update(self, role_name=None, description=None, domains=None):
try:
if role_name:
role = Role.query.filter(Role.name == role_name).first()
self.role_id = role.id
if role_name:
role = Role.query.filter(Role.name == role_name).first()
self.role_id = role.id
if description:
self.description = description
if description:
self.description = description
if domains is not None:
domain_object_list = Domain.query \
.filter(Domain.name.in_(domains)) \
.all()
self.domains[:] = domain_object_list
if domains:
domain_object_list = Domain.query \
.filter(Domain.name.in_(domains)) \
.all()
self.domains[:] = domain_object_list
if accounts is not None:
account_object_list = Account.query \
.filter(Account.name.in_(accounts)) \
.all()
self.accounts[:] = account_object_list
db.session.commit()
db.session.commit()
except Exception as e:
msg_str = 'Update of apikey failed. Error: {0}'
current_app.logger.error(msg_str.format(e))
db.session.rollback()
raise e
msg_str = 'Update of apikey failed. Error: {0}'
current_app.logger.error(msg_str.format(e))
db.session.rollback
raise e
def get_hashed_password(self, plain_text_password=None):
# Hash a password for the first time
@@ -97,15 +87,6 @@
else:
pw = self.plain_text_password
# The salt value is intentionally re-used here because the
# implementation relies on the API key's value alone for database
# lookup: ApiKey.is_validate() could not tell whether a given key
# is valid if bcrypt.gensalt() were used for each key. This is
# acceptable as long as new API keys are generated in a
# cryptographically secure fashion, since the key's own randomness
# then compensates for forgoing a unique salt per hash.
return bcrypt.hashpw(pw.encode('utf-8'),
current_app.config.get('SALT').encode('utf-8'))
@@ -131,12 +112,3 @@
raise Exception("Unauthorized")
return apikey
def associate_account(self, account):
return True
def dissociate_account(self, account):
return True
def get_accounts(self):
return True
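The switch from `random.choice` to `secrets.choice` above matters because API keys are credentials and need a CSPRNG. And the fixed-salt hashing the comment defends can be illustrated more simply: any deterministic keyed hash lets the stored digest be recomputed from the presented key alone, which is what makes lookup-by-key possible (`hmac` here is an illustration of the property, not the project's bcrypt scheme):

```python
import hashlib
import hmac
import secrets
import string


def generate_api_key(length=15):
    # CSPRNG-backed key generation, as in the new code above.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))


SERVER_SALT = b"static-demo-salt"  # plays the role of the shared bcrypt salt


def hash_key(plain_key):
    # Deterministic: the same key always yields the same digest,
    # so the digest itself can serve as a database lookup value.
    return hmac.new(SERVER_SALT, plain_key.encode(), hashlib.sha256).hexdigest()


key = generate_api_key()
```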

View File

@@ -1,20 +0,0 @@
from .base import db
class ApiKeyAccount(db.Model):
__tablename__ = 'apikey_account'
id = db.Column(db.Integer, primary_key=True)
apikey_id = db.Column(db.Integer,
db.ForeignKey('apikey.id'),
nullable=False)
account_id = db.Column(db.Integer,
db.ForeignKey('account.id'),
nullable=False)
db.UniqueConstraint('apikey_id', 'account_id', name='uniq_apikey_account')
def __init__(self, apikey_id, account_id):
self.apikey_id = apikey_id
self.account_id = account_id
def __repr__(self):
return '<ApiKey_Account {0} {1}>'.format(self.apikey_id, self.account_id)
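The new `apikey_account` table is a plain many-to-many link table with a uniqueness constraint on the pair. The same shape in raw SQL, using sqlite3 purely to illustrate the constraint:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE apikey_account (
        id INTEGER PRIMARY KEY,
        apikey_id INTEGER NOT NULL,
        account_id INTEGER NOT NULL,
        UNIQUE (apikey_id, account_id)
    )
""")
conn.execute("INSERT INTO apikey_account (apikey_id, account_id) VALUES (1, 7)")
try:
    # Linking the same key to the same account twice violates the constraint.
    conn.execute("INSERT INTO apikey_account (apikey_id, account_id) VALUES (1, 7)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```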

View File

@@ -1,8 +1,6 @@
import json
import re
import traceback
from flask import current_app
from flask_login import current_user
from urllib.parse import urljoin
from distutils.util import strtobool
@@ -21,7 +19,7 @@
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(255), index=True, unique=True)
master = db.Column(db.String(128))
type = db.Column(db.String(8), nullable=False)
type = db.Column(db.String(6), nullable=False)
serial = db.Column(db.BigInteger)
notified_serial = db.Column(db.BigInteger)
last_check = db.Column(db.Integer)
@@ -68,13 +66,13 @@
return True
except Exception as e:
current_app.logger.error(
'Can not create setting {0} for zone {1}. {2}'.format(
'Can not create setting {0} for domain {1}. {2}'.format(
setting, self.name, e))
return False
def get_domain_info(self, domain_name):
"""
Get all zones which has in PowerDNS
Get all domains which has in PowerDNS
"""
headers = {'X-API-Key': self.PDNS_API_KEY}
jdata = utils.fetch_json(urljoin(
@@ -88,7 +86,7 @@
def get_domains(self):
"""
Get all zones which has in PowerDNS
Get all domains which has in PowerDNS
"""
headers = {'X-API-Key': self.PDNS_API_KEY}
jdata = utils.fetch_json(
@@ -108,33 +106,17 @@
return domain.id
except Exception as e:
current_app.logger.error(
'Zone does not exist. ERROR: {0}'.format(e))
'Domain does not exist. ERROR: {0}'.format(e))
return None
def search_idn_domains(self, search_string):
"""
Search for IDN zones using the provided search string.
"""
# Compile the regular expression pattern for matching IDN zone names
idn_pattern = re.compile(r'^xn--')
# Search for zone names that match the IDN pattern
idn_domains = [
domain for domain in self.get_domains() if idn_pattern.match(domain)
]
# Filter the search results based on the provided search string
return [domain for domain in idn_domains if search_string in domain]
def update(self):
"""
Fetch zones from PowerDNS and update them in the DB
Fetch domains from PowerDNS and update them in the DB
"""
db_domain = Domain.query.all()
list_db_domain = [d.name for d in db_domain]
dict_db_domain = dict((x.name, x) for x in db_domain)
current_app.logger.info("Found {} zones in PowerDNS-Admin".format(
current_app.logger.info("Found {} domains in PowerDNS-Admin".format(
len(list_db_domain)))
headers = {'X-API-Key': self.PDNS_API_KEY}
try:
@ -149,31 +131,20 @@ class Domain(db.Model):
"Found {} zones in PowerDNS server".format(len(list_jdomain)))
try:
# zones should be removed from db since they no longer exist in powerdns
# domains should be removed from db since they no longer exist in powerdns
should_removed_db_domain = list(
set(list_db_domain).difference(list_jdomain))
for domain_name in should_removed_db_domain:
self.delete_domain_from_pdnsadmin(domain_name, do_commit=False)
except Exception as e:
current_app.logger.error(
'Can not delete zone from DB. DETAIL: {0}'.format(e))
'Can not delete domain from DB. DETAIL: {0}'.format(e))
current_app.logger.debug(traceback.format_exc())
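The removal step above is essentially a set difference between the zone names recorded in the local DB and the names reported by PowerDNS; anything only on the DB side gets deleted. A minimal standalone sketch of that comparison (function and variable names are illustrative, not the actual models):

```python
def zones_to_remove(db_zone_names, pdns_zone_names):
    """Return zone names present in the local DB but no longer in PowerDNS."""
    return sorted(set(db_zone_names) - set(pdns_zone_names))

# 'old.example.com' was deleted directly in PowerDNS, so a sync pass
# should drop it from the local DB as well.
stale = zones_to_remove(
    ['a.example.com', 'old.example.com'],
    ['a.example.com', 'new.example.com'],
)
```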
# update/add new zone
account_cache = {}
# update/add new domain
for data in jdata:
if 'account' in data:
# if no account is set don't try to query db
if data['account'] == '':
find_account_id = None
else:
find_account_id = account_cache.get(data['account'])
# if account was not queried in the past and hence not in cache
if find_account_id is None:
find_account_id = Account().get_id_by_name(data['account'])
# add to cache
account_cache[data['account']] = find_account_id
account_id = find_account_id
account_id = Account().get_id_by_name(data['account'])
else:
current_app.logger.debug(
"No 'account' data found in API result - Unsupported PowerDNS version?"
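The account cache added on the master side avoids one DB query per zone when many zones share the same account name. The same idea as a hedged, self-contained sketch (the `lookup` callable is a stand-in for `Account().get_id_by_name`, and the names here are illustrative):

```python
def make_cached_lookup(lookup):
    """Memoize a name -> id lookup for the duration of one sync pass."""
    cache = {}

    def cached(name):
        if name == '':           # no account set: don't query the DB at all
            return None
        if name not in cache:    # first time we see this account name
            cache[name] = lookup(name)
        return cache[name]

    return cached


calls = []

def fake_db_lookup(name):
    """Record each invocation so we can observe how often the 'DB' is hit."""
    calls.append(name)
    return {'ops': 7}.get(name)


get_account_id = make_cached_lookup(fake_db_lookup)
ids = [get_account_id(n) for n in ['ops', 'ops', '', 'ops']]
# the underlying lookup runs only once despite three 'ops' hits
```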
@ -187,16 +158,16 @@ class Domain(db.Model):
self.add_domain_to_powerdns_admin(domain=data, do_commit=False)
db.session.commit()
current_app.logger.info('Update zone finished')
current_app.logger.info('Update domain finished')
return {
'status': 'ok',
'msg': 'Zone table has been updated successfully'
'msg': 'Domain table has been updated successfully'
}
except Exception as e:
db.session.rollback()
current_app.logger.error(
'Cannot update zone table. Error: {0}'.format(e))
return {'status': 'error', 'msg': 'Cannot update zone table'}
'Cannot update domain table. Error: {0}'.format(e))
return {'status': 'error', 'msg': 'Cannot update domain table'}
def update_pdns_admin_domain(self, domain, account_id, data, do_commit=True):
# existing domain, only update if something actually has changed
@ -218,11 +189,11 @@ class Domain(db.Model):
try:
if do_commit:
db.session.commit()
current_app.logger.info("Updated PDNS-Admin zone {0}".format(
current_app.logger.info("Updated PDNS-Admin domain {0}".format(
domain.name))
except Exception as e:
db.session.rollback()
current_app.logger.info("Rolled back zone {0} {1}".format(
current_app.logger.info("Rolled back Domain {0} {1}".format(
domain.name, e))
raise
@ -234,10 +205,10 @@ class Domain(db.Model):
domain_master_ips=[],
account_name=None):
"""
Add a zone to power dns
Add a domain to power dns
"""
headers = {'X-API-Key': self.PDNS_API_KEY, 'Content-Type': 'application/json'}
headers = {'X-API-Key': self.PDNS_API_KEY}
domain_name = domain_name + '.'
domain_ns = [ns + '.' for ns in domain_ns]
@ -269,23 +240,23 @@ class Domain(db.Model):
if 'error' in jdata.keys():
current_app.logger.error(jdata['error'])
if jdata.get('http_code') == 409:
return {'status': 'error', 'msg': 'Zone already exists'}
return {'status': 'error', 'msg': 'Domain already exists'}
return {'status': 'error', 'msg': jdata['error']}
else:
current_app.logger.info(
'Added zone successfully to PowerDNS: {0}'.format(
'Added domain successfully to PowerDNS: {0}'.format(
domain_name))
self.add_domain_to_powerdns_admin(domain_dict=post_data)
return {'status': 'ok', 'msg': 'Added zone successfully'}
return {'status': 'ok', 'msg': 'Added domain successfully'}
except Exception as e:
current_app.logger.error('Cannot add zone {0} {1}'.format(
current_app.logger.error('Cannot add domain {0} {1}'.format(
domain_name, e))
current_app.logger.debug(traceback.format_exc())
return {'status': 'error', 'msg': 'Cannot add this zone.'}
return {'status': 'error', 'msg': 'Cannot add this domain.'}
def add_domain_to_powerdns_admin(self, domain=None, domain_dict=None, do_commit=True):
"""
Read zone from PowerDNS and add into PDNS-Admin
Read Domain from PowerDNS and add into PDNS-Admin
"""
headers = {'X-API-Key': self.PDNS_API_KEY}
if not domain:
@ -299,7 +270,7 @@ class Domain(db.Model):
timeout=int(Setting().get('pdns_api_timeout')),
verify=Setting().get('verify_ssl_connections'))
except Exception as e:
current_app.logger.error('Can not read zone from PDNS')
current_app.logger.error('Can not read domain from PDNS')
current_app.logger.error(e)
current_app.logger.debug(traceback.format_exc())
@ -325,22 +296,22 @@ class Domain(db.Model):
if do_commit:
db.session.commit()
current_app.logger.info(
"Synced PowerDNS zone to PDNS-Admin: {0}".format(d.name))
"Synced PowerDNS Domain to PDNS-Admin: {0}".format(d.name))
return {
'status': 'ok',
'msg': 'Added zone successfully to PowerDNS-Admin'
'msg': 'Added Domain successfully to PowerDNS-Admin'
}
except Exception as e:
db.session.rollback()
current_app.logger.info("Rolled back zone {0}".format(d.name))
current_app.logger.info("Rolled back Domain {0}".format(d.name))
raise
def update_soa_setting(self, domain_name, soa_edit_api):
domain = Domain.query.filter(Domain.name == domain_name).first()
if not domain:
return {'status': 'error', 'msg': 'Zone does not exist.'}
return {'status': 'error', 'msg': 'Domain does not exist.'}
headers = {'X-API-Key': self.PDNS_API_KEY, 'Content-Type': 'application/json'}
headers = {'X-API-Key': self.PDNS_API_KEY}
if soa_edit_api not in ["DEFAULT", "INCREASE", "EPOCH", "OFF"]:
soa_edit_api = 'DEFAULT'
@ -365,7 +336,7 @@ class Domain(db.Model):
return {'status': 'error', 'msg': jdata['error']}
else:
current_app.logger.info(
'soa-edit-api changed for zone {0} successfully'.format(
'soa-edit-api changed for domain {0} successfully'.format(
domain_name))
return {
'status': 'ok',
@ -375,11 +346,11 @@ class Domain(db.Model):
current_app.logger.debug(e)
current_app.logger.debug(traceback.format_exc())
current_app.logger.error(
'Cannot change soa-edit-api for zone {0}'.format(
'Cannot change soa-edit-api for domain {0}'.format(
domain_name))
return {
'status': 'error',
'msg': 'Cannot change soa-edit-api for this zone.'
'msg': 'Cannot change soa-edit-api for this domain.'
}
def update_kind(self, domain_name, kind, masters=[]):
@ -388,9 +359,9 @@ class Domain(db.Model):
"""
domain = Domain.query.filter(Domain.name == domain_name).first()
if not domain:
return {'status': 'error', 'msg': 'Zone does not exist.'}
return {'status': 'error', 'msg': 'Domain does not exist.'}
headers = {'X-API-Key': self.PDNS_API_KEY, 'Content-Type': 'application/json'}
headers = {'X-API-Key': self.PDNS_API_KEY}
post_data = {"kind": kind, "masters": masters}
@ -409,26 +380,26 @@ class Domain(db.Model):
return {'status': 'error', 'msg': jdata['error']}
else:
current_app.logger.info(
'Update zone kind for {0} successfully'.format(
'Update domain kind for {0} successfully'.format(
domain_name))
return {
'status': 'ok',
'msg': 'Zone kind changed successfully'
'msg': 'Domain kind changed successfully'
}
except Exception as e:
current_app.logger.error(
'Cannot update kind for zone {0}. Error: {1}'.format(
'Cannot update kind for domain {0}. Error: {1}'.format(
domain_name, e))
current_app.logger.debug(traceback.format_exc())
return {
'status': 'error',
'msg': 'Cannot update kind for this zone.'
'msg': 'Cannot update kind for this domain.'
}
def create_reverse_domain(self, domain_name, domain_reverse_name):
"""
Check the existing reverse lookup zone,
Check the existing reverse lookup domain,
if it does not exist, create a new one automatically
"""
domain_obj = Domain.query.filter(Domain.name == domain_name).first()
@ -448,9 +419,9 @@ class Domain(db.Model):
result = self.add(domain_reverse_name, 'Master', 'DEFAULT', [], [])
self.update()
if result['status'] == 'ok':
history = History(msg='Add reverse lookup zone {0}'.format(
history = History(msg='Add reverse lookup domain {0}'.format(
domain_reverse_name),
detail=json.dumps({
detail=str({
'domain_type': 'Master',
'domain_master_ips': ''
}),
@ -459,7 +430,7 @@ class Domain(db.Model):
else:
return {
'status': 'error',
'msg': 'Adding reverse lookup zone failed'
'msg': 'Adding reverse lookup domain failed'
}
domain_user_ids = self.get_user()
if len(domain_user_ids) > 0:
@ -469,13 +440,13 @@ class Domain(db.Model):
'status':
'ok',
'msg':
'New reverse lookup zone created with granted privileges'
'New reverse lookup domain created with granted privileges'
}
return {
'status': 'ok',
'msg': 'New reverse lookup zone created without users'
'msg': 'New reverse lookup domain created without users'
}
return {'status': 'ok', 'msg': 'Reverse lookup zone already exists'}
return {'status': 'ok', 'msg': 'Reverse lookup domain already exists'}
def get_reverse_domain_name(self, reverse_host_address):
c = 1
@ -504,22 +475,22 @@ class Domain(db.Model):
def delete(self, domain_name):
"""
Delete a single zone name from powerdns
Delete a single domain name from powerdns
"""
try:
self.delete_domain_from_powerdns(domain_name)
self.delete_domain_from_pdnsadmin(domain_name)
return {'status': 'ok', 'msg': 'Delete zone successfully'}
return {'status': 'ok', 'msg': 'Delete domain successfully'}
except Exception as e:
current_app.logger.error(
'Cannot delete zone {0}'.format(domain_name))
'Cannot delete domain {0}'.format(domain_name))
current_app.logger.error(e)
current_app.logger.debug(traceback.format_exc())
return {'status': 'error', 'msg': 'Cannot delete zone'}
return {'status': 'error', 'msg': 'Cannot delete domain'}
def delete_domain_from_powerdns(self, domain_name):
"""
Delete a single zone name from powerdns
Delete a single domain name from powerdns
"""
headers = {'X-API-Key': self.PDNS_API_KEY}
@ -531,12 +502,12 @@ class Domain(db.Model):
method='DELETE',
verify=Setting().get('verify_ssl_connections'))
current_app.logger.info(
'Deleted zone successfully from PowerDNS: {0}'.format(
'Deleted domain successfully from PowerDNS: {0}'.format(
domain_name))
return {'status': 'ok', 'msg': 'Delete zone successfully'}
return {'status': 'ok', 'msg': 'Delete domain successfully'}
def delete_domain_from_pdnsadmin(self, domain_name, do_commit=True):
# Revoke permission before deleting zone
# Revoke permission before deleting domain
domain = Domain.query.filter(Domain.name == domain_name).first()
domain_user = DomainUser.query.filter(
DomainUser.domain_id == domain.id)
@ -548,25 +519,17 @@ class Domain(db.Model):
domain_setting.delete()
domain.apikeys[:] = []
# Remove history for zone
if not Setting().get('preserve_history'):
domain_history = History.query.filter(
History.domain_id == domain.id
)
if domain_history:
domain_history.delete()
# then remove zone
# then remove domain
Domain.query.filter(Domain.name == domain_name).delete()
if do_commit:
db.session.commit()
current_app.logger.info(
"Deleted zone successfully from pdnsADMIN: {}".format(
"Deleted domain successfully from pdnsADMIN: {}".format(
domain_name))
def get_user(self):
"""
Get users (id) who have access to this zone name
Get users (id) who have access to this domain name
"""
user_ids = []
query = db.session.query(
@ -596,7 +559,7 @@ class Domain(db.Model):
except Exception as e:
db.session.rollback()
current_app.logger.error(
'Cannot revoke user privileges on zone {0}. DETAIL: {1}'.
'Cannot revoke user privileges on domain {0}. DETAIL: {1}'.
format(self.name, e))
current_app.logger.debug(print(traceback.format_exc()))
@ -608,43 +571,14 @@ class Domain(db.Model):
except Exception as e:
db.session.rollback()
current_app.logger.error(
'Cannot grant user privileges to zone {0}. DETAIL: {1}'.
'Cannot grant user privileges to domain {0}. DETAIL: {1}'.
format(self.name, e))
current_app.logger.debug(print(traceback.format_exc()))
def revoke_privileges_by_id(self, user_id):
"""
Remove a single user from privilege list based on user_id
"""
new_uids = [u for u in self.get_user() if u != user_id]
users = []
for uid in new_uids:
users.append(User(id=uid).get_user_info_by_id().username)
self.grant_privileges(users)
def add_user(self, user):
"""
Add a single user to zone by User
"""
try:
du = DomainUser(self.id, user.id)
db.session.add(du)
db.session.commit()
return True
except Exception as e:
db.session.rollback()
current_app.logger.error(
'Cannot add user privileges on zone {0}. DETAIL: {1}'.
format(self.name, e))
return False
def update_from_master(self, domain_name):
"""
Update records from Master DNS server
"""
import urllib.parse
domain = Domain.query.filter(Domain.name == domain_name).first()
if domain:
headers = {'X-API-Key': self.PDNS_API_KEY}
@ -652,7 +586,7 @@ class Domain(db.Model):
r = utils.fetch_json(urljoin(
self.PDNS_STATS_URL, self.API_EXTENDED_URL +
'/servers/localhost/zones/{0}/axfr-retrieve'.format(
urllib.parse.quote_plus(domain.name))),
domain.name)),
headers=headers,
timeout=int(
Setting().get('pdns_api_timeout')),
@ -669,14 +603,12 @@ class Domain(db.Model):
'There was something wrong, please contact administrator'
}
else:
return {'status': 'error', 'msg': 'This zone does not exist'}
return {'status': 'error', 'msg': 'This domain does not exist'}
def get_domain_dnssec(self, domain_name):
"""
Get zone DNSSEC information
Get domain DNSSEC information
"""
import urllib.parse
domain = Domain.query.filter(Domain.name == domain_name).first()
if domain:
headers = {'X-API-Key': self.PDNS_API_KEY}
@ -685,7 +617,7 @@ class Domain(db.Model):
urljoin(
self.PDNS_STATS_URL, self.API_EXTENDED_URL +
'/servers/localhost/zones/{0}/cryptokeys'.format(
urllib.parse.quote_plus(domain.name))),
domain.name)),
headers=headers,
timeout=int(Setting().get('pdns_api_timeout')),
method='GET',
@ -693,13 +625,13 @@ class Domain(db.Model):
if 'error' in jdata:
return {
'status': 'error',
'msg': 'DNSSEC is not enabled for this zone'
'msg': 'DNSSEC is not enabled for this domain'
}
else:
return {'status': 'ok', 'dnssec': jdata}
except Exception as e:
current_app.logger.error(
'Cannot get zone dnssec. DETAIL: {0}'.format(e))
'Cannot get domain dnssec. DETAIL: {0}'.format(e))
return {
'status':
'error',
@ -707,26 +639,22 @@ class Domain(db.Model):
'There was something wrong, please contact administrator'
}
else:
return {'status': 'error', 'msg': 'This zone does not exist'}
return {'status': 'error', 'msg': 'This domain does not exist'}
def enable_domain_dnssec(self, domain_name):
"""
Enable zone DNSSEC
Enable domain DNSSEC
"""
import urllib.parse
domain = Domain.query.filter(Domain.name == domain_name).first()
if domain:
headers = {'X-API-Key': self.PDNS_API_KEY, 'Content-Type': 'application/json'}
headers = {'X-API-Key': self.PDNS_API_KEY}
try:
# Enable API-RECTIFY for domain, BEFORE activating DNSSEC
post_data = {"api_rectify": True}
jdata = utils.fetch_json(
urljoin(
self.PDNS_STATS_URL, self.API_EXTENDED_URL +
'/servers/localhost/zones/{0}'.format(
urllib.parse.quote_plus(domain.name)
)),
'/servers/localhost/zones/{0}'.format(domain.name)),
headers=headers,
timeout=int(Setting().get('pdns_api_timeout')),
method='PUT',
@ -736,7 +664,7 @@ class Domain(db.Model):
return {
'status': 'error',
'msg':
'API-RECTIFY could not be enabled for this zone',
'API-RECTIFY could not be enabled for this domain',
'jdata': jdata
}
@ -746,8 +674,7 @@ class Domain(db.Model):
urljoin(
self.PDNS_STATS_URL, self.API_EXTENDED_URL +
'/servers/localhost/zones/{0}/cryptokeys'.format(
urllib.parse.quote_plus(domain.name)
)),
domain.name)),
headers=headers,
timeout=int(Setting().get('pdns_api_timeout')),
method='POST',
@ -758,7 +685,7 @@ class Domain(db.Model):
'status':
'error',
'msg':
'Cannot enable DNSSEC for this zone. Error: {0}'.
'Cannot enable DNSSEC for this domain. Error: {0}'.
format(jdata['error']),
'jdata':
jdata
@ -778,24 +705,22 @@ class Domain(db.Model):
}
else:
return {'status': 'error', 'msg': 'This zone does not exist'}
return {'status': 'error', 'msg': 'This domain does not exist'}
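The enable flow above is order-sensitive: API-RECTIFY is switched on with a PUT to the zone endpoint *before* the key is created with a POST to its `cryptokeys` endpoint. A hedged sketch that only builds the request plan as data, with no network I/O (the POST payload is a hypothetical example, since the diff truncates the actual body):

```python
from urllib.parse import quote_plus, urljoin


def dnssec_enable_plan(api_base, zone):
    """Ordered (method, url, payload) requests to enable DNSSEC on a zone."""
    zone_url = urljoin(api_base,
                       'servers/localhost/zones/{0}'.format(quote_plus(zone)))
    return [
        # Step 1: enable API-RECTIFY, BEFORE activating DNSSEC.
        ('PUT', zone_url, {'api_rectify': True}),
        # Step 2: create the cryptokey (payload is a hypothetical example).
        ('POST', zone_url + '/cryptokeys', {'keytype': 'ksk', 'active': True}),
    ]


plan = dnssec_enable_plan('http://127.0.0.1:8081/api/v1/', 'example.com.')
```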
def delete_dnssec_key(self, domain_name, key_id):
"""
Remove keys DNSSEC
"""
import urllib.parse
domain = Domain.query.filter(Domain.name == domain_name).first()
if domain:
headers = {'X-API-Key': self.PDNS_API_KEY, 'Content-Type': 'application/json'}
headers = {'X-API-Key': self.PDNS_API_KEY}
try:
# Deactivate DNSSEC
jdata = utils.fetch_json(
urljoin(
self.PDNS_STATS_URL, self.API_EXTENDED_URL +
'/servers/localhost/zones/{0}/cryptokeys/{1}'.format(
urllib.parse.quote_plus(domain.name), key_id)),
domain.name, key_id)),
headers=headers,
timeout=int(Setting().get('pdns_api_timeout')),
method='DELETE',
@ -805,13 +730,13 @@ class Domain(db.Model):
'status':
'error',
'msg':
'Cannot disable DNSSEC for this zone. Error: {0}'.
'Cannot disable DNSSEC for this domain. Error: {0}'.
format(jdata['error']),
'jdata':
jdata
}
# Disable API-RECTIFY for zone, AFTER deactivating DNSSEC
# Disable API-RECTIFY for domain, AFTER deactivating DNSSEC
post_data = {"api_rectify": False}
jdata = utils.fetch_json(
urljoin(
@ -826,7 +751,7 @@ class Domain(db.Model):
return {
'status': 'error',
'msg':
'API-RECTIFY could not be disabled for this zone',
'API-RECTIFY could not be disabled for this domain',
'jdata': jdata
}
@ -845,26 +770,25 @@ class Domain(db.Model):
}
else:
return {'status': 'error', 'msg': 'This zone does not exist'}
return {'status': 'error', 'msg': 'This domain does not exist'}
def assoc_account(self, account_id, update=True):
def assoc_account(self, account_id):
"""
Associate account with a zone, specified by account id
Associate domain with a domain, specified by account id
"""
domain_name = self.name
# Sanity check - domain name
if domain_name == "":
return {'status': False, 'msg': 'No zone name specified'}
return {'status': False, 'msg': 'No domain name specified'}
# read domain and check that it exists
domain = Domain.query.filter(Domain.name == domain_name).first()
if not domain:
return {'status': False, 'msg': 'Zone does not exist'}
return {'status': False, 'msg': 'Domain does not exist'}
headers = {'X-API-Key': self.PDNS_API_KEY, 'Content-Type': 'application/json'}
headers = {'X-API-Key': self.PDNS_API_KEY}
account_name_old = Account().get_name_by_id(domain.account_id)
account_name = Account().get_name_by_id(account_id)
post_data = {"account": account_name}
@ -884,32 +808,24 @@ class Domain(db.Model):
current_app.logger.error(jdata['error'])
return {'status': 'error', 'msg': jdata['error']}
else:
if update:
self.update()
msg_str = 'Account changed for zone {0} successfully'
self.update()
msg_str = 'Account changed for domain {0} successfully'
current_app.logger.info(msg_str.format(domain_name))
history = History(msg='Update zone {0} associate account {1}'.format(domain.name, 'none' if account_name == '' else account_name),
detail = json.dumps({
'assoc_account': 'None' if account_name == '' else account_name,
'dissoc_account': 'None' if account_name_old == '' else account_name_old
}),
created_by=current_user.username)
history.add()
return {'status': 'ok', 'msg': 'account changed successfully'}
except Exception as e:
current_app.logger.debug(e)
current_app.logger.debug(traceback.format_exc())
msg_str = 'Cannot change account for zone {0}'
msg_str = 'Cannot change account for domain {0}'
current_app.logger.error(msg_str.format(domain_name))
return {
'status': 'error',
'msg': 'Cannot change account for this zone.'
'msg': 'Cannot change account for this domain.'
}
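One small but consequential change in this file is recording history detail with `json.dumps` instead of `str`: `str(dict)` produces a Python repr with single quotes, which a later JSON consumer cannot parse, while `json.dumps` round-trips cleanly. A quick illustration:

```python
import json

detail = {'assoc_account': 'ops', 'dissoc_account': 'None'}

as_repr = str(detail)         # "{'assoc_account': 'ops', ...}" - not valid JSON
as_json = json.dumps(detail)  # '{"assoc_account": "ops", ...}' - parseable later


def parses_as_json(text):
    """Return True if `text` is valid JSON."""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False
```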
def get_account(self):
"""
Get current account associated with this zone
Get current account associated with this domain
"""
domain = Domain.query.filter(Domain.name == self.name).first()
@ -918,7 +834,7 @@ class Domain(db.Model):
def is_valid_access(self, user_id):
"""
Check if the user is allowed to access this
zone name
domain name
"""
return db.session.query(Domain) \
.outerjoin(DomainUser, Domain.id == DomainUser.domain_id) \
@ -929,18 +845,3 @@ class Domain(db.Model):
DomainUser.user_id == user_id,
AccountUser.user_id == user_id
)).filter(Domain.id == self.id).first()
# Return None if this zone does not exist as a record;
# return the parent zone that holds the record if it exists
def is_overriding(self, domain_name):
upper_domain_name = '.'.join(domain_name.split('.')[1:])
while upper_domain_name != '':
if self.get_id_by_name(upper_domain_name.rstrip('.')) != None:
upper_domain = self.get_domain_info(upper_domain_name)
if 'rrsets' in upper_domain:
for r in upper_domain['rrsets']:
if domain_name.rstrip('.') in r['name'].rstrip('.'):
current_app.logger.error('Zone already exists as a record: {} under zone: {}'.format(r['name'].rstrip('.'), upper_domain_name))
return upper_domain_name
upper_domain_name = '.'.join(upper_domain_name.split('.')[1:])
return None
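`is_overriding` walks up the name hierarchy, stripping one label at a time, and reports the first parent zone whose rrsets already cover the candidate name. A self-contained sketch of that walk, operating on a plain dict instead of the PowerDNS API lookups used in the real method:

```python
def find_overriding_parent(name, zones):
    """Return the parent zone whose rrsets already cover `name`, else None.

    `zones` maps zone name -> list of rrset names; a plain-dict stand-in
    for the get_id_by_name/get_domain_info calls in the real method.
    """
    name = name.rstrip('.')
    parent = '.'.join(name.split('.')[1:])
    while parent:
        if parent in zones:
            for rrset_name in zones[parent]:
                # same substring check the original uses on r['name']
                if name in rrset_name.rstrip('.'):
                    return parent
        parent = '.'.join(parent.split('.')[1:])
    return None


zones = {'example.com': ['www.sub.example.com.', 'example.com.']}
hit = find_overriding_parent('sub.example.com', zones)    # covered by a record
miss = find_overriding_parent('other.example.com', zones) # no matching rrset
```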

Some files were not shown because too many files have changed in this diff.