
Why the Genie Warlock's "Bottled Respite" is overwhelmingly powerful

Despite being limited to a single use per long rest (which can be as short as 4 hours with High Elf Trance), if it is preserved according to the UA description, it stands to be one of, if not the most busted features in 5E history. For just a one-level dip into Warlock and a free action, it has the potential functionality of several high-level spells combined, making the value of dipping into the class just for it fairly high.
First, let's overview all the benefits granted by the ability in its own right, then go over all of the ways to combine it with other abilities:
------------------------------------
Now for the fun part, which is all that can be done with a little bit of magical help, though it can only be a partial list since the possibilities are endless. Let's dive in.
That is all I can think of for now. And remember what we're talking about: a level 1 Warlock action, once per long rest. It has unbelievable value.
submitted by Orwellze to dndnext

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step through setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual made for Windows users. If you want to try it on Linux or macOS, we have also added the commands necessary to get the CodeReady Containers running on those operating systems. Be warned, however: there are some system requirements necessary to run the CodeReady Containers that we will be using. These requirements are specified in the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform who has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual on Linux or macOS, we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container Platform makes use of CodeReady Containers: pre-configured containers that can be used for development and testing purposes. There are also CodeReady Workspaces, which provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because CodeReady Containers and CodeReady Workspaces help programmers and developers build their applications faster and test them in the same environment. One of the advantages provided by OpenShift is efficient container orchestration, which allows for faster container provisioning, deployment, and management by streamlining and automating these processes.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual, some knowledge is mandatory. Because most of the commands are entered in the command-line interface, it is necessary to know how it works and how to browse through files and folders. If you either don't have this basic knowledge or have trouble with the basic command-line interface commands in PowerShell, a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system's documentation or introduction guides, though the documentation can be overwhelming given the sheer number of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
macOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge, there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge of container technology like Docker and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

Running the Red Hat OpenShift CodeReady Containers requires the following minimum hardware:
Hardware requirements
CodeReady Containers requires the following system resources:
● 4 virtual CPUs (vCPUs)
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with Hyper-V (Intel) or SVM mode (AMD); this has to be enabled in the BIOS (a quick way to verify this is shown below)
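To check this on Windows before installing, you can run the systeminfo command and look at the "Hyper-V Requirements" lines near the end of its output; they report whether virtualization is available and enabled in firmware:
C:\Users\[username]>systeminfo 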
Software requirements
On the software side, the Red Hat OpenShift CodeReady Containers have the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
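If you are unsure which Windows 10 release you are running, one way to check it from PowerShell is to read the release ID from the standard version key in the registry:
C:\Users\[username]>(Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion").ReleaseId 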
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

The CodeReady Containers on Linux require the libvirt and NetworkManager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
● Fedora: sudo dnf install NetworkManager
● Red Hat Enterprise Linux/CentOS: su -c 'yum install NetworkManager'
● Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Installation

Getting started with the installation

To install CodeReady Containers, a few steps must be undertaken. Because an OpenShift account is necessary to use the application, this will be the first step. An account can be made on “https://www.openshift.com/”, where you need to press Log in and after that select the option “Create one now”.
After making an account, the next step is to download the latest release of CodeReady Containers and the pull secret from “https://cloud.redhat.com/openshift/install/crc/installer-provisioned”. Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved, because it is needed later.
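As a minimal sketch of this step in PowerShell (the archive name and destination folder are examples, not fixed values), extracting the download and putting the folder on your PATH for the current session could look like this:
C:\Users\[username]>Expand-Archive -Path "$HOME\Downloads\crc-windows-amd64.zip" -DestinationPath "$HOME\crc" 
C:\Users\[username]>$Env:PATH = "$HOME\crc;$Env:PATH" 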
The command-line interface has to be opened before we can continue with the installation. For Windows we will use PowerShell. All the commands we use during the installation procedure of this guide are entered in this command-line interface unless stated otherwise. To be able to run the commands, use the command-line interface to go to the location in your $PATH where you extracted the CodeReady Containers archive.
If you have installed an outdated version and you wish to update, you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the virtual machine, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps, please confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command; this should show you the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run CodeReady Containers and create the ~/.crc directory if it did not previously exist. During this process you have to supply your pull secret; once the process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and the OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes, you need to delete the virtual machine with the $crc delete command, then create and start a new virtual machine with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers; to prevent data loss, we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, you need to keep in mind that it is not possible to make any changes to the virtual machine afterwards. For this tutorial it is not necessary to change the configuration; if you don't want to make any changes, please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
Note: it is possible that you will get a nameserver error later on; if this is the case, please start the machine with crc start -n 1.1.1.1

Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those that wish to do so and know what they are doing. For macOS and Linux, however, it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers, use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand before it is able to configure anything; the available subcommands are:
get, which shows the value of a configurable property
set, which sets the value of a configurable property
unset, which removes the value of a configurable property, reverting it to its default
view, which shows the configuration in read-only mode.
These subcommands operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this, you can set the value of a property that starts with skip-check or warn-check to true, to skip the check or emit a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get 
C:\Users\[username]\$PATH>crc config set 
C:\Users\[username]\$PATH>crc config unset 
C:\Users\[username]\$PATH>crc config view 
C:\Users\[username]\$PATH>crc config --help 
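For example, to skip a single startup check you could set its skip property to true. Note that the exact property names (skip-check-ram here is illustrative) vary between crc releases, so list them with $crc config --help first:
C:\Users\[username]\$PATH>crc config set skip-check-ram true 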

Configuring the Virtual Machine

You can use the CPUs and memory properties to configure the default number of vCPUs and the amount of memory available to the virtual machine.
To increase the number of vCPUs available to the virtual machine, use $crc config set CPUs <number>. Keep in mind that the default number of CPUs is 4 and the number of vCPUs you wish to assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use $crc config set memory <size>. Keep in mind that the default amount of memory is 9216 MiB and the amount of memory you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set CPUs <number> 
C:\Users\[username]\$PATH>crc config set memory <size-in-MiB> 
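As a concrete, illustrative example, the following would give the virtual machine 6 vCPUs and 12 GiB (12288 MiB) of memory the next time it is created:
C:\Users\[username]\$PATH>crc config set CPUs 6 
C:\Users\[username]\$PATH>crc config set memory 12288 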

Configuring the DNS

Windows / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers, these are:
crc.testing, this is the domain for the core OpenShift services.
apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks are run to verify the configuration.

macOS DNS setup

macOS expects the following DNS configuration for the CodeReady Containers:
● The CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires the api.crc.testing entry to function properly; CodeReady Containers adds an entry to /etc/hosts pointing at the VM IP address.

Linux DNS setup

CodeReady Containers on Linux expects a slightly different DNS configuration. CodeReady Containers expects NetworkManager to manage networking, and NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward the requests for the crc.testing and apps-crc.testing domains to 192.168.130.11. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11
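A quick way to verify that the forwarding works (assuming the dig utility from bind-utils/dnsutils is installed) is to resolve one of the cluster names and check that 192.168.130.11 comes back:
$dig +short api.crc.testing 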

Accessing the Openshift Cluster

Accessing the Openshift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine, you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or the client binary (oc).
First you need to execute the $crc console command; this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as the developer user with the credentials provided in the output of the crc start command.
It is also possible to view the passwords for the kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through both the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management, and the developer user for creating projects or OpenShift applications and the deployment of these applications.
C:\Users\[username]\$PATH>crc console 
C:\Users\[username]\$PATH>crc console --credentials 

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env 
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH" 
# Run this command to configure your shell: 
# & crc oc-env | Invoke-Expression 
This means we have to execute the command that the output gives us; in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
Note: this has to be executed every time you start the shell; a solution is to move the oc binary to the same path as the crc binary.
To test if this step went correctly, execute the following command; if it returns without errors, oc is set up properly:
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to log in as the developer user; this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that the $crc start command will provide you with the password that is needed to log in as the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
The oc binary can now be used to interact with your OpenShift cluster. If you, for instance, want to verify whether the OpenShift cluster Operators are available, you can execute the command:
$oc get co 
Keep in mind that by default the CodeReady Containers disables the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes to the network route. We also show how monitoring can be used within the platform; however, within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console, you have to be logged in on the cluster. If you have not yet done this, it can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between views can be done with the dropdown menu at the top left.
Now that you are properly logged in, press the dropdown menu shown in the image below, and from there click on create a project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the dialog in the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with the display name CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The Containers in OpenShift Container Platform are based on OCI or Docker formatted images. An image is a binary that contains everything needed to run a container as well as the metadata of the requirements needed for the container.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied, we will go to the topology view and click on the YAML button.
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, fill in the name, namespace, and your pull secret name (which you created through your registry service account), and click on Create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within PowerShell:
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm 
imagestream.image.openshift.io/mediawiki imported 

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application from the previously imported image, go back to the console and the topology view. From here on, select container image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the image option you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creation process you should see the following; this means that the application is successfully running.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling, namely vertical scaling and horizontal scaling. Vertical scaling means adding more CPU and disk to a single instance, and is no longer supported by OpenShift. Horizontal scaling means increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By pressing either the up or the down arrow, more pods of the same application can be added or removed. This is a form of horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application, the more you scale it up, the more resources it will take up.
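The same scaling can also be done from the command line. As an illustrative sketch (assuming the deployment created earlier is named mediawiki), the following would run three pods of the application:
C:\Users\[username]\$PATH>oc scale deployment/mediawiki --replicas=3 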

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since the OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes ensures that the pods within OpenShift can communicate with each other via the network and assigns each of them their own IP address. This makes all containers within a pod behave as if they were on the same host. Giving each pod its own IP address means that pods can be treated like physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration, and migration. To run multiple services, such as front-end and back-end services, OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The route is not the only thing that can be changed or configured. Two other options that might be interesting, but that will not be demonstrated in this manual, are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate/key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default, all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. A minimal sketch of such a policy follows this list.
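As a minimal sketch of the second option (the policy name is illustrative and not part of this demonstration), a NetworkPolicy that only accepts traffic from pods in the same project looks like this:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  # An empty podSelector selects all pods in the project.
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}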
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation.
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
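A route can also be created from the command line. As an illustrative alternative (again assuming the service is named mediawiki), exposing the service creates a route with a generated hostname:
C:\Users\[username]\$PATH>oc expose service/mediawiki 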
Storage
OpenShift makes use of persistent storage; this type of storage uses persistent volume claims (PVCs). PVCs allow the developer to request persistent volumes without needing any knowledge about the underlying infrastructure.
Within this storage there are a few configuration options, the reclaim policy being the one relevant here.
It is important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and therefore you cannot reassign the storage to another PV yet.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV; this can be done by executing the following command:
$oc delete pv <pv-name> 
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset or, if you wish to reuse the same storage asset, you can now create a new PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift; to do this you need to follow these steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and display their following attributes: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason, and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' 
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}' 
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' 
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner, check that you are logged in as Developer, and click on “Monitoring”. Normally this function is not activated within the CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. This can be a developer developing applications or an administrator managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group's members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user, depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform; this default denies access for all usernames and passwords.
First, we’re going to create a new user, the way this is done depends on the identity provider, this depends on the mapping method used as part of the identity provider configuration.
for more information on what mapping methods are and how they function:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps will be as following
$oc create user <username> 
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity : 
The is the name of the identity provider in the master configuration. For example, the following commands create an Identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a user identity mapping for the created user and identity:
$oc create useridentitymapping <identity-provider>:<identity-provider-user-name> <username> 
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now we're going to assign a role to this new user; this can be done by executing the following command:
$oc create clusterrolebinding <name> --clusterrole=<clusterrole-name> --user=<username> 
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller \ --clusterrole=cluster-admin --user=admin 

What did you achieve?

If you followed all the steps within this manual, you should now have a functioning MediaWiki application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is the possibility that your CodeReady Containers virtual machine can't connect to the internet due to a nameserver error. When this is encountered, a working fix for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V, it might be because your user is not an admin and therefore can't access the Hyper-V Administrators user group.
  1. Click Start > Control Panel > Administrative Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.
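On recent Windows 10 builds the same change can also be made from an elevated PowerShell prompt; a sketch, with the account name as a placeholder:
PS C:\Windows\system32> Add-LocalGroupMember -Group "Hyper-V Administrators" -Member "[username]" 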

Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of what this is going to look like, together with a few terms that will require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Openshift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to openshift

Unable to run custom scripts via dmenu when it is started with i3's mod+d key

I have encountered strange behaviour regarding dmenu_run and dmenu_recency. When I run dmenu_run or dmenu_recency from a terminal and then execute a simple script like echo "test", the value test is printed in the terminal. However, when I run dmenu_recency or dmenu_run with an i3 keybinding like:
bindsym $mod+d exec --no-startup-id dmenu_recency
and then execute the same simple script, nothing happens. Dmenu launches other installed programs just fine; it just doesn't work for the execution of my custom scripts.
What am I missing here? I suspect I have to add something else to my scripts, but I don't know what. For now it is just plainly this:
echo "test"

EDIT: OK, maybe the script echo "test" is not the best example, since it is true that there is no opened terminal to write to.
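(For a test that does not depend on an attached terminal, the script could raise a desktop notification instead; this assumes notify-send from libnotify is installed:)
#!/bin/sh
# Visible even when the script is launched without a terminal.
notify-send "test"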
But the same thing happens if I try to execute a script that looks like this:
code ~/.i3/config
This just opens the i3 config file with Visual Studio Code. Again, this works when I execute it via dmenu_run called from an existing terminal, but it doesn't work when executed via dmenu_run called via the i3 keybinding mod+d.
EDIT 2:
.i3/config
# i3 config file (v4)
# Please see http://i3wm.org/docs/userguide.html for a complete reference!
# Set mod key (Mod1=<Alt>, Mod4=<Super>)
set $mod Mod4
# My testing shortcuts
bindsym $mod+c exec code
bindsym $mod+Shift+x exec terminal; exec terminal
bindsym $mod+F4 exec /home/erik/Programs/pycharm-community-2020.2.1/bin/pycharm.sh
bindsym $mod+Shift+F2 exec /home/erik/CustomScripts/google_calendar
# CONFIGURABLE PRINTSCREENS OPTIONS
# take a screenshot of a screen region and copy it to a clipboard
#bindsym --release Shift+Print exec "ScreenCapture.sh -s /home/erik/Pictures/Screenshots/"
# take a screenshot of a whole window and copy it to a clipboard
#bindsym --release Print exec "ScreenCapture.sh /home/erik/Pictures/Screenshots/"
# set default desktop layout (default is tiling)
# workspace_layout tabbed <stacking|tabbed>
# Configure border style <normal|1pixel|pixel xx|none|pixel>
default_border pixel 2
default_floating_border normal
# Hide borders
hide_edge_borders none
# change borders
bindsym $mod+u border none
bindsym $mod+y border pixel 1
bindsym $mod+n border normal
# You can also use any non-zero value if you'd like to have a border (this is to prevent issues with gaps)
# for_window [class=".*"] border pixel 1
# Font for window titles. Will also be used by the bar unless a different font
# is used in the bar {} block below.
font xft:URWGothic-Book 11
# Use Mouse+$mod to drag floating windows
floating_modifier $mod
# start a terminal
bindsym $mod+Return exec terminal
# kill focused window
bindsym $mod+Shift+q kill
# start program launcher
# bindsym $mod+d exec --no-startup-id dmenu_recency
bindsym $mod+d exec --no-startup-id home/erik/CustomScripts/redit_solution dmenu_recency
# launch categorized menu
bindsym $mod+z exec --no-startup-id morc_menu
################################################################################################
## sound-section - DO NOT EDIT if you wish to automatically upgrade Alsa -> Pulseaudio later! ##
################################################################################################
#exec --no-startup-id volumeicon
#bindsym $mod+Ctrl+m exec terminal -e 'alsamixer'
exec --no-startup-id start-pulseaudio-x11
exec --no-startup-id pa-applet
bindsym $mod+Ctrl+m exec pavucontrol
################################################################################################
# Screen brightness controls
# bindsym XF86MonBrightnessUp exec "xbacklight -inc 10; notify-send 'brightness up'"
# bindsym XF86MonBrightnessDown exec "xbacklight -dec 10; notify-send 'brightness down'"
# Start Applications
bindsym $mod+Ctrl+b exec terminal -e 'bmenu'
bindsym $mod+F2 exec chromium
bindsym $mod+F3 exec pcmanfm
# bindsym $mod+F3 exec ranger
bindsym $mod+Shift+F3 exec pcmanfm_pkexec
bindsym $mod+F5 exec terminal -e 'mocp'
bindsym $mod+t exec --no-startup-id pkill compton
bindsym $mod+Ctrl+t exec --no-startup-id compton -b
bindsym $mod+Shift+d --release exec "killall dunst; exec notify-send 'restart dunst'"
bindsym Print exec --no-startup-id i3-scrot
bindsym $mod+Print --release exec --no-startup-id i3-scrot -w
bindsym $mod+Shift+Print --release exec --no-startup-id i3-scrot -s
bindsym $mod+Shift+h exec xdg-open /usr/share/doc/manjaro/i3_help.pdf
bindsym $mod+Ctrl+x --release exec --no-startup-id xkill
focus_follows_mouse no
# change focus
bindsym $mod+j focus left
bindsym $mod+k focus down
bindsym $mod+l focus up
bindsym $mod+semicolon focus right
# alternatively, you can use the cursor keys:
bindsym $mod+Left focus left
bindsym $mod+Down focus down
bindsym $mod+Up focus up
bindsym $mod+Right focus right
# move focused window
bindsym $mod+Shift+j move left
bindsym $mod+Shift+k move down
bindsym $mod+Shift+l move up
bindsym $mod+Shift+semicolon move right
# alternatively, you can use the cursor keys:
bindsym $mod+Shift+Left move left
bindsym $mod+Shift+Down move down
bindsym $mod+Shift+Up move up
bindsym $mod+Shift+Right move right
# workspace back and forth (with/without active container)
workspace_auto_back_and_forth yes
bindsym $mod+b workspace back_and_forth
bindsym $mod+Shift+b move container to workspace back_and_forth; workspace back_and_forth
# split orientation
bindsym $mod+h split h;exec notify-send 'tile horizontally'
bindsym $mod+v split v;exec notify-send 'tile vertically'
bindsym $mod+q split toggle
# toggle fullscreen mode for the focused container
bindsym $mod+f fullscreen toggle
# change container layout (stacked, tabbed, toggle split)
bindsym $mod+s layout stacking
bindsym $mod+w layout tabbed
bindsym $mod+e layout toggle split
# toggle tiling / floating
bindsym $mod+Shift+space floating toggle
# change focus between tiling / floating windows
bindsym $mod+space focus mode_toggle
# toggle sticky
bindsym $mod+Shift+s sticky toggle
# focus the parent container
bindsym $mod+a focus parent
# move the currently focused window to the scratchpad
bindsym $mod+Shift+minus move scratchpad
# Show the next scratchpad window or hide the focused scratchpad window.
# If there are multiple scratchpad windows, this command cycles through them.
bindsym $mod+minus scratchpad show
#navigate workspaces next / previous
bindsym $mod+Ctrl+Right workspace next
bindsym $mod+Ctrl+Left workspace prev
# Workspace names
# to display names or symbols instead of plain workspace numbers you can use
# something like: set $ws1 1:mail
#                 set $ws2 2:
set $ws1 1
set $ws2 2
set $ws3 3
set $ws4 4
set $ws5 5
set $ws6 6
set $ws7 7
set $ws8 8
# switch to workspace
bindsym $mod+1 workspace $ws1
bindsym $mod+2 workspace $ws2
bindsym $mod+3 workspace $ws3
bindsym $mod+4 workspace $ws4
bindsym $mod+5 workspace $ws5
bindsym $mod+6 workspace $ws6
bindsym $mod+7 workspace $ws7
bindsym $mod+8 workspace $ws8
# Move focused container to workspace
bindsym $mod+Ctrl+1 move container to workspace $ws1
bindsym $mod+Ctrl+2 move container to workspace $ws2
bindsym $mod+Ctrl+3 move container to workspace $ws3
bindsym $mod+Ctrl+4 move container to workspace $ws4
bindsym $mod+Ctrl+5 move container to workspace $ws5
bindsym $mod+Ctrl+6 move container to workspace $ws6
bindsym $mod+Ctrl+7 move container to workspace $ws7
bindsym $mod+Ctrl+8 move container to workspace $ws8
# Move to workspace with focused container
bindsym $mod+Shift+1 move container to workspace $ws1; workspace $ws1
bindsym $mod+Shift+2 move container to workspace $ws2; workspace $ws2
bindsym $mod+Shift+3 move container to workspace $ws3; workspace $ws3
bindsym $mod+Shift+4 move container to workspace $ws4; workspace $ws4
bindsym $mod+Shift+5 move container to workspace $ws5; workspace $ws5
bindsym $mod+Shift+6 move container to workspace $ws6; workspace $ws6
bindsym $mod+Shift+7 move container to workspace $ws7; workspace $ws7
bindsym $mod+Shift+8 move container to workspace $ws8; workspace $ws8
# Open applications on specific workspaces
# assign [class="Thunderbird"] $ws1
# assign [class="Pale moon"] $ws2
# assign [class="Pcmanfm"] $ws3
# assign [class="Skype"] $ws5
# Open specific applications in floating mode
for_window [title="alsamixer"] floating enable border pixel 1
for_window [class="calamares"] floating enable border normal
for_window [class="Clipgrab"] floating enable
for_window [title="File Transfer*"] floating enable
for_window [class="fpakman"] floating enable
for_window [class="Galculator"] floating enable border pixel 1
for_window [class="GParted"] floating enable border normal
for_window [title="i3_help"] floating enable sticky enable border normal
for_window [class="Lightdm-settings"] floating enable
for_window [class="Lxappearance"] floating enable sticky enable border normal
for_window [class="Manjaro-hello"] floating enable
for_window [class="Manjaro Settings Manager"] floating enable border normal
for_window [title="MuseScore: Play Panel"] floating enable
for_window [class="Nitrogen"] floating enable sticky enable border normal
for_window [class="Oblogout"] fullscreen enable
for_window [class="octopi"] floating enable
for_window [title="About Pale Moon"] floating enable
for_window [class="Pamac-manager"] floating enable
for_window [class="Pavucontrol"] floating enable
for_window [class="qt5ct"] floating enable sticky enable border normal
for_window [class="Qtconfig-qt4"] floating enable sticky enable border normal
for_window [class="Simple-scan"] floating enable border normal
for_window [class="(?i)System-config-printer.py"] floating enable border normal
for_window [class="Skype"] floating enable border normal
for_window [class="Timeset-gui"] floating enable border normal
for_window [class="(?i)virtualbox"] floating enable border normal
for_window [class="Xfburn"] floating enable
# switch to workspace with urgent window automatically
for_window [urgent=latest] focus
# reload the configuration file
bindsym $mod+Shift+c reload
# restart i3 inplace (preserves your layout/session, can be used to upgrade i3)
bindsym $mod+Shift+r restart
# exit i3 (logs you out of your X session)
bindsym $mod+Shift+e exec "i3-nagbar -t warning -m 'You pressed the exit shortcut. Do you really want to exit i3? This will end your X session.' -b 'Yes, exit i3' 'i3-msg exit'"
# Set shut down, restart and locking features
bindsym $mod+0 mode "$mode_system"
set $mode_system (l)ock, (e)xit, switch_(u)ser, (s)uspend, (h)ibernate, (r)eboot, (Shift+s)hutdown
mode "$mode_system" {
    bindsym l exec --no-startup-id i3exit lock, mode "default"
    bindsym s exec --no-startup-id i3exit suspend, mode "default"
    bindsym u exec --no-startup-id i3exit switch_user, mode "default"
    bindsym e exec --no-startup-id i3exit logout, mode "default"
    bindsym h exec --no-startup-id i3exit hibernate, mode "default"
    bindsym r exec --no-startup-id i3exit reboot, mode "default"
    bindsym Shift+s exec --no-startup-id i3exit shutdown, mode "default"
    # exit system mode: "Enter" or "Escape"
    bindsym Return mode "default"
    bindsym Escape mode "default"
}
# Resize window (you can also use the mouse for that)
bindsym $mod+r mode "resize"
mode "resize" {
    # These bindings trigger as soon as you enter the resize mode
    # Pressing left will shrink the window’s width.
    # Pressing right will grow the window’s width.
    # Pressing up will shrink the window’s height.
    # Pressing down will grow the window’s height.
    bindsym j resize shrink width 5 px or 5 ppt
    bindsym k resize grow height 5 px or 5 ppt
    bindsym l resize shrink height 5 px or 5 ppt
    bindsym semicolon resize grow width 5 px or 5 ppt
    # same bindings, but for the arrow keys
    bindsym Left resize shrink width 5 px or 5 ppt
    bindsym Down resize grow height 5 px or 5 ppt
    bindsym Up resize shrink height 5 px or 5 ppt
    bindsym Right resize grow width 5 px or 5 ppt
    # exit resize mode: Enter or Escape
    bindsym Return mode "default"
    bindsym Escape mode "default"
}
# Lock screen
bindsym $mod+9 exec --no-startup-id blurlock
# Autostart applications
exec --no-startup-id /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1
exec --no-startup-id nitrogen --restore; sleep 1; compton -b
# exec --no-startup-id manjaro-hello
exec --no-startup-id nm-applet
exec --no-startup-id xfce4-power-manager
exec --no-startup-id pamac-tray
exec --no-startup-id clipit
exec --no-startup-id picom
# exec --no-startup-id blueman-applet
# exec_always --no-startup-id sbxkb
exec --no-startup-id start_conky_maia
# exec --no-startup-id start_conky_green
exec --no-startup-id xautolock -time 10 -locker blurlock
exec_always --no-startup-id ff-theme-util
exec_always --no-startup-id fix_xcursor
# Color palette used for the terminal ( ~/.Xresources file )
# Colors are gathered based on the documentation:
# https://i3wm.org/docs/userguide.html#xresources
# Change the variable name at the place you want to match the color
# of your terminal like this:
# [example]
# If you want your bar to have the same background color as your
# terminal background change the line 362 from:
# background #14191D
# to:
# background $term_background
# Same logic applied to everything else.
set_from_resource $term_background background
set_from_resource $term_foreground foreground
set_from_resource $term_color0 color0
set_from_resource $term_color1 color1
set_from_resource $term_color2 color2
set_from_resource $term_color3 color3
set_from_resource $term_color4 color4
set_from_resource $term_color5 color5
set_from_resource $term_color6 color6
set_from_resource $term_color7 color7
set_from_resource $term_color8 color8
set_from_resource $term_color9 color9
set_from_resource $term_color10 color10
set_from_resource $term_color11 color11
set_from_resource $term_color12 color12
set_from_resource $term_color13 color13
set_from_resource $term_color14 color14
set_from_resource $term_color15 color15
# Start i3bar to display a workspace bar (plus the system information i3status if available)
bar {
    i3bar_command i3bar
    status_command i3status
    position bottom
    ## please set your primary output first. Example: 'xrandr --output eDP1 --primary'
    # tray_output primary
    # tray_output eDP1
    bindsym button4 nop
    bindsym button5 nop
    # font xft:URWGothic-Book 11
    strip_workspace_numbers yes
    colors {
        background #222D31
        statusline #F9FAF9
        separator  #ff9a1f
        #                    border  backgr. text
        focused_workspace  #ff9a1f #ff9a1f #292F34
        active_workspace   #595B5B #353836 #FDF6E3
        inactive_workspace #595B5B #222D31 #EEE8D5
        binding_mode       #16a085 #2C2C2C #F9FAF9
        urgent_workspace   #16a085 #FDF6E3 #E5201D
    }
}
# hide/unhide i3status bar
bindsym $mod+m bar mode toggle
# Theme colors
# class                   border  backgr. text    indic.  child_border
client.focused          #ff9a1f #ff9a1f #000000 #ff9a1f
client.focused_inactive #2F3D44 #2F3D44 #1ABC9C #454948
client.unfocused        #2F3D44 #2F3D44 #1ABC9C #454948
client.urgent           #CB4B16 #FDF6E3 #1ABC9C #268BD2
client.placeholder      #000000 #0c0c0c #ffffff #000000
client.background       #2B2C2B
#############################
### settings for i3-gaps: ###
#############################
# Set inner/outer gaps
gaps inner 0
gaps outer 0
# Additionally, you can issue commands with the following syntax. This is useful to bind keys to changing the gap size.
# gaps inner|outer current|all set|plus|minus <px>
# gaps inner all set 10
# gaps outer all plus 5
# Smart gaps (gaps used if only more than one container on the workspace)
smart_gaps on
# Smart borders (draw borders around container only if it is not the only container on this workspace)
# on|no_gaps (on=always activate and no_gaps=only activate if the gap size to the edge of the screen is 0)
smart_borders on
# Press $mod+Shift+g to enter the gap mode. Choose o or i for modifying outer/inner gaps. Press one of + / - (in-/decrement for current workspace) or 0 (remove gaps for current workspace). If you also press Shift with these keys, the change will be global for all workspaces.
set $mode_gaps Gaps: (o) outer, (i) inner
set $mode_gaps_outer Outer Gaps: +|-|0 (local), Shift + +|-|0 (global)
set $mode_gaps_inner Inner Gaps: +|-|0 (local), Shift + +|-|0 (global)
bindsym $mod+Shift+g mode "$mode_gaps"
mode "$mode_gaps" {
    bindsym o mode "$mode_gaps_outer"
    bindsym i mode "$mode_gaps_inner"
    bindsym Return mode "default"
    bindsym Escape mode "default"
}
mode "$mode_gaps_inner" {
    bindsym plus gaps inner current plus 5
    bindsym minus gaps inner current minus 5
    bindsym 0 gaps inner current set 0
    bindsym Shift+plus gaps inner all plus 5
    bindsym Shift+minus gaps inner all minus 5
    bindsym Shift+0 gaps inner all set 0
    bindsym Return mode "default"
    bindsym Escape mode "default"
}
mode "$mode_gaps_outer" {
    bindsym plus gaps outer current plus 5
    bindsym minus gaps outer current minus 5
    bindsym 0 gaps outer current set 0
    bindsym Shift+plus gaps outer all plus 5
    bindsym Shift+minus gaps outer all minus 5
    bindsym Shift+0 gaps outer all set 0
    bindsym Return mode "default"
    bindsym Escape mode "default"
}
.bashrc
#
# ~/.bashrc
#
[[ $- != *i* ]] && return
colors() {
    local fgc bgc vals seq0
    printf "Color escapes are %s\n" '\e[${value};...;${value}m'
    printf "Values 30..37 are \e[33mforeground colors\e[m\n"
    printf "Values 40..47 are \e[43mbackground colors\e[m\n"
    printf "Value 1 gives a \e[1mbold-faced look\e[m\n\n"
    # foreground colors
    for fgc in {30..37}; do
        # background colors
        for bgc in {40..47}; do
            fgc=${fgc#37} # white
            bgc=${bgc#40} # black
            vals="${fgc:+$fgc;}${bgc}"
            vals=${vals%%;}
            seq0="${vals:+\e[${vals}m}"
            printf "  %-9s" "${seq0:-(default)}"
            printf " ${seq0}TEXT\e[m"
            printf " \e[${vals:+${vals+$vals;}}1mBOLD\e[m"
        done
        echo; echo
    done
}
[ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion
# Change the window title of X terminals
case ${TERM} in
    xterm*|rxvt*|Eterm*|aterm|kterm|gnome*|interix|konsole*)
        PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/\~}\007"'
        ;;
    screen*)
        PROMPT_COMMAND='echo -ne "\033_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/\~}\033\\"'
        ;;
esac
use_color=true
# Set colorful PS1 only on colorful terminals.
# dircolors --print-database uses its own built-in database
# instead of using /etc/DIR_COLORS. Try to use the external file
# first to take advantage of user additions. Use internal bash
# globbing instead of external grep binary.
safe_term=${TERM//[^[:alnum:]]/?} # sanitize TERM
match_lhs=""
[[ -f ~/.dir_colors ]] && match_lhs="${match_lhs}$(<~/.dir_colors)"
[[ -f /etc/DIR_COLORS ]] && match_lhs="${match_lhs}$(</etc/DIR_COLORS)"
[[ -z ${match_lhs} ]] && type -P dircolors >/dev/null && match_lhs=$(dircolors --print-database)
[[ $'\n'${match_lhs} == *$'\n'"TERM "${safe_term}* ]] && use_color=true
if ${use_color} ; then
    # Enable colors for ls, etc. Prefer ~/.dir_colors #64489
    if type -P dircolors >/dev/null ; then
        if [[ -f ~/.dir_colors ]] ; then
            eval $(dircolors -b ~/.dir_colors)
        elif [[ -f /etc/DIR_COLORS ]] ; then
            eval $(dircolors -b /etc/DIR_COLORS)
        fi
    fi
    if [[ ${EUID} == 0 ]] ; then
        PS1='\[\033[01;31m\][\h\[\033[01;36m\] \W\[\033[01;31m\]]\$\[\033[00m\] '
    else
        PS1='\[\033[01;32m\][\u@\h\[\033[01;37m\] \W\[\033[01;32m\]]\$\[\033[00m\] '
    fi
    alias ls='ls --color=auto'
    alias grep='grep --colour=auto'
    alias egrep='egrep --colour=auto'
    alias fgrep='fgrep --colour=auto'
else
    if [[ ${EUID} == 0 ]] ; then
        # show user@host when we don't have colors
        PS1='\u@\h \W \$ '
    else
        PS1='\u@\h \w \$ '
    fi
fi
unset use_color safe_term match_lhs sh
alias cp="cp -i" # confirm before overwriting something
alias df='df -h' # human-readable sizes
alias free='free -m' # show sizes in MB
alias np='nano -w PKGBUILD'
alias more=less
xhost +local:root > /dev/null 2>&1
complete -cf sudo
# Bash won't get SIGWINCH if another process is in the foreground.
# Enable checkwinsize so that bash will check the terminal size when
# it regains control. #65623
# http://cnswww.cns.cwru.edu/~chet/bash/FAQ (E11)
shopt -s checkwinsize
shopt -s expand_aliases
# export QT_SELECT=4
# Enable history appending instead of overwriting. #139609
shopt -s histappend
#
# # ex - archive extractor
# # usage: ex <file>
ex ()
{
    if [ -f $1 ] ; then
        case $1 in
            *.tar.bz2) tar xjf $1 ;;
            *.tar.gz)  tar xzf $1 ;;
            *.bz2)     bunzip2 $1 ;;
            *.rar)     unrar x $1 ;;
            *.gz)      gunzip $1 ;;
            *.tar)     tar xf $1 ;;
            *.tbz2)    tar xjf $1 ;;
            *.tgz)     tar xzf $1 ;;
            *.zip)     unzip $1 ;;
            *.Z)       uncompress $1 ;;
            *.7z)      7z x $1 ;;
            *)         echo "'$1' cannot be extracted via ex()" ;;
        esac
    else
        echo "'$1' is not a valid file"
    fi
}
# Custom programs
export PATH="/home/user/Programs/pycharm-community-2020.2.1/bin:$PATH"
# Custom scripts
export PATH="/home/user/CustomScripts:$PATH"

submitted by Amuoeba8 to i3wm

Differences between LISP 1.5 and Common Lisp, Part 1:

[Edit: I didn't mean to put a colon in the title.]
In this post we'll be looking at some of the things that make LISP 1.5 and Common Lisp different. There isn't too much surviving LISP 1.5 code, but some of the code that is still around is interesting and worthy of study.
Here are some conventions used in this post of which you might take notice:
Sources are linked sometimes below, but here is a list of links that were helpful while writing this:
The differences between LISP 1.5 and Common Lisp can be classified into the following groups:
  1. Superficial differences—matters of syntax
  2. Conventional differences—matters of code style and form
  3. Fundamental differences—matters of semantics
  4. Library differences—matters of available functions
This post will go through the first three of these groups in that order. A future post will discuss library differences, except for some functions dealing with character-based input and output, since they are a little world unto their own.
[Originally the library differences were part of this post, but it exceeded the length limit on posts (40000 characters)].

Superficial differences.

LISP 1.5 was used initially on computers that had very limited character sets. The machine on which it ran at MIT, the IBM 7090, used a six-bit, binary-coded decimal encoding for characters, which could theoretically represent up to sixty-four characters. In practice, only forty-six were widely used. The repertoire of this character set consisted of the twenty-six uppercase letters, the nine digits, the blank character ' ', and the ten special characters '-', '/', '=', '.', '$', ',', '(', ')', '*', and '+'. You might note the absence of the apostrophe/single quote—there was no shorthand for the quote operator in LISP 1.5 because no sensible character was available.
When the LISP 1.5 system read input from cards, it treated the end of a card not like a blank character (as is done in C, TeX, etc.), but as nothing. Therefore the first character of a symbol's name could be the last character of a card, the remaining characters appearing at the beginning of the next card. Lisp's syntax allowed for the omission of almost all whitespace besides that which was used as delimiters to separate tokens.
List syntax. Lists were contained within parentheses, as is the case in Common Lisp. From the beginning Lisp had the consing dot, which was written as a period in LISP 1.5; the interaction between the period when used as the consing dot and the period when used as the decimal point will be described shortly.
In LISP 1.5, the comma was equivalent to a blank character; both could be used to delimit items within a list. The LISP I Programmer's Manual, p. 24, tells us that
The commas in writing S-expressions may be omitted. This is an accident.
Number syntax. Numbers took one of three forms: fixed-point integers, floating-point numbers, and octal numbers. (Of course octal numbers were just an alternative notation for the fixed-point integers.)
Fixed-point integers were written simply as the decimal representation of the integers, with an optional sign. It isn't explicitly mentioned whether a plus sign is allowed in this case or if only a minus sign is, but floating-point syntax does allow an initial plus sign, so it makes sense that the fixed-point number syntax would as well.
Floating-point numbers had the syntax described by the following context-free grammar, where a term in square brackets indicates that the term is optional:
float:
    [sign] integer '.' [integer] exponent
    [sign] integer '.' integer [exponent]
exponent:
    'E' [sign] digit [digit]
integer:
    digit
    integer digit
digit: one of
    '0' '1' '2' '3' '4' '5' '6' '7' '8' '9'
sign: one of
    '+' '-'
This grammar generates things like 100.3 and 1.E5 but not things like .01 or 14E2 or 100.. The manual seems to imply that if you wrote, say, (100. 200), the period would be treated as a consing dot [the result being (cons 100 200)].
Floating-point numbers are limited in absolute value to the interval (2^-128, 2^128), and eight digits are significant.
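To make the grammar concrete, here is a small validator for it. This is a sketch of mine, not anything from the manual, and the function name is made up:
(defun lisp15-float-p (string)
  ;; Check STRING against the LISP 1.5 floating-point grammar above.
  (let ((pos 0)
        (len (length string)))
    (labels ((peek () (when (< pos len) (char string pos)))
             (eat-sign () (when (member (peek) '(#\+ #\-)) (incf pos)))
             (eat-integer ()            ; one or more decimal digits
               (let ((start pos))
                 (loop while (and (peek) (digit-char-p (peek)))
                       do (incf pos))
                 (> pos start))))
      (eat-sign)
      (and (eat-integer)                ; the integer part is mandatory
           (eql (peek) #\.)             ; and so is the point
           (let ((fraction (progn (incf pos) (eat-integer))))
             (cond ((eql (peek) #\E)    ; exponent: 'E' [sign] digit [digit]
                    (incf pos)
                    (eat-sign)
                    (let ((start pos))
                      (loop while (and (peek) (digit-char-p (peek)))
                            do (incf pos))
                      (and (<= 1 (- pos start) 2) (= pos len))))
                   ;; no exponent: the fraction is then required
                   (t (and fraction (= pos len)))))))))

;; (lisp15-float-p "100.3") => T    (lisp15-float-p "1.E5") => T
;; (lisp15-float-p ".01")   => NIL  (lisp15-float-p "14E2") => NIL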
Octal numbers are defined by the following grammar:
octal:
    [sign] octal-digits 'Q' [integer]
octal-digits:
    octal-digit [octal-digit] [octal-digit] [octal-digit] [octal-digit] [octal-digit]
    [octal-digit] [octal-digit] [octal-digit] [octal-digit] [octal-digit] [octal-digit]
octal-digit: one of
    '0' '1' '2' '3' '4' '5' '6' '7'
The optional integer following 'Q' is a scale factor, which is a decimal integer representing an exponent with a base of 8. Positive octal numbers behave as one would expect: The value is shifted to the left 3×s bits, where s is the scale factor. Octal was useful on the IBM 7090, since it used thirty-six-bit words; twelve octal digits (which is the maximum allowed in an octal number in LISP 1.5) thus represent a single word in a convenient way that is more compact than binary (but still easily convertible to and from binary). If the number has a negative sign, then the thirty-sixth bit is logically ored with 1.
The syntax of Common Lisp's numbers is a superset of that of LISP 1.5. The only major difference is in the notation of octal numbers; Common Lisp uses the sharpsign reader macro for that purpose. Because of the somewhat odd semantics of the minus sign in octal numbers in LISP 1.5, it is not necessarily trivial to convert a LISP 1.5 octal number into a Common Lisp expression resulting in the same value.
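As an illustration, here is one way to compute the word value denoted by a LISP 1.5 octal literal in Common Lisp. This is my sketch with made-up names, following the semantics just described (scale factor as a left shift, negative sign setting the sign bit):
(defun lisp15-octal-value (digits scale negativep)
  ;; DIGITS is a string of up to twelve octal digits; SCALE is the
  ;; decimal integer written after the Q; NEGATIVEP is true when the
  ;; literal carries a minus sign.
  (let ((value (ldb (byte 36 0)          ; keep the low thirty-six bits
                    (ash (parse-integer digits :radix 8)
                         (* 3 scale)))))
    (if negativep
        (logior value (ash 1 35))        ; or the sign (thirty-sixth) bit with 1
        value)))

;; -7Q would be (lisp15-octal-value "7" 0 t) => 34359738375,
;; i.e. 400000000007 in octal.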
Symbol syntax. Symbol names can be up to thirty characters in length. While the actual name of a symbol was kept on its property list under the pname indicator and could be any sequence of thirty characters, the syntax accepted by the read program for symbols was limited in a few ways. First, a name must not begin with a digit or with either of the characters '+' or '-', and its first two characters cannot both be '$'. Otherwise, any of the alphanumeric characters may be used, along with the special characters '+', '-', '=', '*', '/', and '$'. The fact that a symbol can't begin with a sign character or a digit has to do with the number syntax; the fact that a symbol can't begin with '$$' has to do with the mechanism by which the LISP 1.5 reader allowed you to write characters that are usually not allowed in symbols, which is described next.
Two dollar signs initiated the reading of what we today might call an "escape sequence". An escape sequence had the form "$$xSx", where x was any character and S was a sequence of up to thirty characters not including x. For example, $$x()x would get the symbol whose name is '()' and would print as '()'. Thus it is similar in purpose to Common Lisp's | syntax. There is a significant difference: It could not be embedded within a symbol, unlike Common Lisp's |. In this respect it is closer to Maclisp's | reader macro (which created a single token) than it is to Common Lisp's multiple escape character. In LISP 1.5, "A$$X()X$" would be read as (1) the symbol A$$X, (2) the empty list, (3) the symbol X.
The following code sets up a $ reader macro so that symbols using the $$ notation will be read in properly, while leaving things like $eof$ alone.
(defun dollar-sign-reader (stream character)
  (declare (ignore character))
  (let ((next (read-char stream t nil t)))
    (cond ((char= next #\$)
           (let ((terminator (read-char stream t nil t)))
             (values (intern (with-output-to-string (name)
                               (loop for c := (read-char stream t nil t)
                                     until (char= c terminator)
                                     do (write-char c name)))))))
          (t
           (unread-char next stream)
           (with-standard-io-syntax
             (read (make-concatenated-stream
                    (make-string-input-stream "$")
                    stream)
                   t nil t))))))

(set-macro-character #\$ #'dollar-sign-reader t)
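With the macro installed, input using the $$ notation reads as intended while ordinary symbols containing dollar signs are left alone; a couple of hypothetical REPL checks:
(read-from-string "$$x()x") ; => the symbol named "()", printed |()|
(read-from-string "$eof$")  ; => the symbol $EOF$, untouched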

Conventional differences.

LISP 1.5 is an old programming language. Generally, compared to its contemporaries (such as FORTRANs I–IV), it holds up well to modern standards, but sometimes its age does show. And there were some aspects of LISP 1.5 that might be surprising to programmers familiar only with Common Lisp or a Scheme.
M-expressions. John McCarthy's original concept of Lisp was a language with a syntax like this (from the LISP 1.5 Programmer's Manual, p. 11):
equal[x;y]=[atom[x]→[atom[y]→eq[x;y]; T→F];
            equal[car[x];car[y]]→equal[cdr[x];cdr[y]];
            T→F]
There are several things to note. First is the entirely different phrase structure. It's an infix language, looking much closer to mathematics than the Lisp we know and love. Square brackets are used instead of parentheses, and semicolons are used instead of commas (or blanks). When square brackets do not enclose function arguments (or parameters, when to the left of the equals sign), they set up a conditional expression; the arrows separate predicate expressions and consequent expressions.
If that was Lisp, then where do s-expressions come in? Answer: quoting. In the m-expression notation, uppercase strings of characters represent quoted symbols, and parenthesized lists represent quoted lists. Here is an example from page 13 of the manual:
λ[[x;y];cons[car[x];y]][(A B);(C D)] 
As an s-expressions, this would be
((lambda (x y) (cons (car x) y)) '(A B) '(C D)) 
The majority of the code in the manual is presented in m-expression form.
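As another example, the equal definition at the top of this section transcribes into s-expression form as something like the following (my transcription, in LISP 1.5 style, where F is the false constant):
(EQUAL (LAMBDA (X Y)
         (COND ((ATOM X) (COND ((ATOM Y) (EQ X Y)) (T F)))
               ((EQUAL (CAR X) (CAR Y)) (EQUAL (CDR X) (CDR Y)))
               (T F))))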
So why did s-expressions stick? There are a number of reasons. The earliest Lisp interpreter was a translation of the program for eval in McCarthy's paper introducing Lisp, which interpreted quoted data; therefore it read code in the form of s-expressions. S-expressions are much easier for a computer to parse than m-expressions, and also more consistent. (Also, the character set mentioned above includes neither square brackets nor a semicolon, let alone a lambda character.) But in publications m-expressions were seen frequently; perhaps the syntax was seen as a kind of "Lisp pseudocode".
Comments. LISP 1.5 had no built-in commenting mechanism. It's easy enough to define a comment operator in the language, but it seemed like nobody felt a need for them.
Interestingly, FORTRAN I had comments. Assembly languages of the time sort of had comments, in that they had a portion of each line/card that was ignored in which you could put any text. FORTRAN was ahead of its time.
(Historical note: The semicolon comment used in Common Lisp comes from Maclisp. Maclisp likely got it from PDP-10 assembly language, which let a semicolon and/or a line break terminate a statement; thus anything following a semicolon is ignored. The convention of octal numbers by default, decimal numbers being indicated by a trailing decimal point, of Maclisp too comes from the assembly language.)
Code formatting. The code in the manual that isn't written using m-expression syntax is generally lacking in meaningful indentation and spacing. Here is an example (p. 49):
(TH1 (LAMBDA (A1 A2 A C) (COND ((NULL A) (TH2 A1 A2 NIL NIL
C)) (T (OR (MEMBER (CAR A) C) (COND ((ATOM (CAR A)) (TH1
(COND ((MEMBER (CAR A) A1) A1) (T (CONS (CAR A) A1))) A2
(CDR A) C)) (T (TH1 A1 (COND ((MEMBER (CAR A) A2) A2) (T
(CONS (CAR A) A2))) (CDR A) C))))))))
Nowadays we might indent it like so:
(TH1 (LAMBDA (A1 A2 A C)
       (COND ((NULL A) (TH2 A1 A2 NIL NIL C))
             (T (OR (MEMBER (CAR A) C)
                    (COND ((ATOM (CAR A))
                           (TH1 (COND ((MEMBER (CAR A) A1) A1)
                                      (T (CONS (CAR A) A1)))
                                A2
                                (CDR A)
                                C))
                          (T (TH1 A1
                                  (COND ((MEMBER (CAR A) A2) A2)
                                        (T (CONS (CAR A) A2)))
                                  (CDR A)
                                  C))))))))
Part of the lack of formatting stems probably from the primarily punched-card-based programming world of the time; you would see the indented structure only by printing a listing of your code, so there is no need to format the punched cards carefully. LISP 1.5 allowed a very free format, especially when compared to FORTRAN; the consequence is that early LISP 1.5 programs are very difficult to read because of the lack of spacing, while old FORTRAN programs are limited at least to one statement per line.
The close relationship of Lisp and pretty-printing originates in programs developed to produce nicely formatted listings of Lisp code.
Lisp code from the mid-sixties used some peculiar formatting conventions that seem odd today. Here is a quote from Steele and Gabriel's Evolution of Lisp:
This intermediate example is derived from a 1966 coding style:
DEFINE((
(MEMBER (LAMBDA (A X) (COND ((NULL X) F)
                            ((EQ A (CAR X) ) T)
                            (T (MEMBER A (CDR X))) ))) ))
The design of this style appears to take the name of the function, the arguments, and the very beginning of the COND as an idiom, and hence they are on the same line together. The branches of the COND clause line up, which shows the structure of the cases considered.
This kind of indentation is somewhat reminiscent of the formatting of Algol programs in publications.
Programming style. Old LISP 1.5 programs can seem somewhat primitive. There is heavy use of the prog feature, which is related partially to the programming style that was common at the time and partially to the lack of control structures in LISP 1.5. You could express iteration only by using recursion or by using prog+go; there wasn't a built-in looping facility. There is a library function called for that is something like the early form of Maclisp's do (the later form would be inherited in Common Lisp), but no surviving LISP 1.5 code uses it. [I'm thinking of making another post about converting programs using prog to the more structured forms that Common Lisp supports, if doing so would make the logic of the program clearer. Naturally there is a lot of literature on so called "goto elimination" and doing it automatically, so it would not present any new knowledge, but it would have lots of Lisp examples.]
LISP 1.5 did not have a let construct. You would use either a prog and setq or a lambda:
(let ((x y)) ...) 
is equivalent to
((lambda (x) ...) y) 
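In fact, a single-binding let can be defined as a macro that performs exactly this rewrite; a minimal sketch (the name let1 is mine):
(defmacro let1 (binding &body body)
  ;; (let1 (x y) ...) expands into ((lambda (x) ...) y)
  `((lambda (,(first binding)) ,@body) ,(second binding)))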
Something that stands out immediately when reading LISP 1.5 code is the heavy, heavy use of combinations of car and cdr. This might help (though car and cdr should be left alone when they are used with dotted pairs):
(car x)   = (first x)
(cdr x)   = (rest x)
(caar x)  = (first (first x))
(cadr x)  = (second x)
(cdar x)  = (rest (first x))
(cddr x)  = (rest (rest x))
(caaar x) = (first (first (first x)))
(caadr x) = (first (second x))
(cadar x) = (second (first x))
(caddr x) = (third x)
(cdaar x) = (rest (first (first x)))
(cdadr x) = (rest (second x))
(cddar x) = (rest (rest (first x)))
(cdddr x) = (rest (rest (rest x)))
Here are some higher compositions, even though LISP 1.5 doesn't have them.
(caaaar x) = (first (first (first (first x))))
(caaadr x) = (first (first (second x)))
(caadar x) = (first (second (first x)))
(caaddr x) = (first (third x))
(cadaar x) = (second (first (first x)))
(cadadr x) = (second (second x))
(caddar x) = (third (first x))
(cadddr x) = (fourth x)
(cdaaar x) = (rest (first (first (first x))))
(cdaadr x) = (rest (first (second x)))
(cdadar x) = (rest (second (first x)))
(cdaddr x) = (rest (third x))
(cddaar x) = (rest (rest (first (first x))))
(cddadr x) = (rest (rest (second x)))
(cdddar x) = (rest (rest (rest (first x))))
(cddddr x) = (rest (rest (rest (rest x))))
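If you ever wanted even deeper compositions, they can be generated mechanically. A hedged sketch (the macro name is mine; note that the four-letter accessors above are already standard Common Lisp, so the example defines a five-letter one):
(defmacro define-cxr (name path)
  ;; PATH lists the CAR/CDR steps to apply, innermost first.
  `(defun ,name (x)
     ,(reduce (lambda (form step) (list step form))
              path :initial-value 'x)))

(define-cxr caddddr (cdr cdr cdr cdr car))
;; (caddddr '(1 2 3 4 5)) => 5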
Things like defstruct and Flavors were many years away. For a long time, Lisp dialects had lists as the only kind of structured data, and programmers rarely defined functions with meaningful names to access components of data structures that are represented as lists. Part of understanding old Lisp code is figuring out how data structures are built up and what their components signify.
In LISP 1.5, it's fairly common to see nil used where today we'd use (). For example:
(LAMBDA NIL ...) 
instead of
(LAMBDA () ...) 
or
(PROG NIL ...)
instead of
(PROG () ...) 
Actually this practice was used in other Lisp dialects as well, although it isn't really seen in newer code.
Identifiers. If you examine the list of all the symbols described in the LISP 1.5 Programmer's Manual, you will notice that none of them differ only in the characters after the sixth character. In other words, it is as if symbol names have only six significant characters, so that abcdef1 and abcdef2 would be considered equal. But it doesn't seem like that was actually the case, since there is no mention of such a limitation in the manual. Another thing of note is that many symbols are six characters or fewer in length.
(A sequence of six characters is nice to store on the hardware on which LISP 1.5 was running. The processor used thirty-six-bit words, and characters were six-bit; therefore six characters fit in a single word. It is conceivable that it might be more efficient to search for names that take only a single word to store than for names that take more than one word to store, but I don't know enough about the computer or implementation of LISP 1.5 to know if that's true.)
Even though the limit on names was thirty characters (the longest symbol names in standard Common Lisp are update-instance-for-different-class and update-instance-for-redefined-class, both thirty-five characters in length), only a few of the LISP 1.5 names are not abbreviated. Things like terpri ("terminate print") and even car and cdr ("contents of address part of register" and "contents of decrement part of register"), which have stuck around until today, are pretty inscrutable if you don't know what they mean.
Thankfully the modern style is to limit abbreviations. Comparing the names that were introduced in Common Lisp versus those that have survived from LISP 1.5 (see the "Library" section below) shows a clear preference for good naming in Common Lisp, even at the risk of lengthy names. The multiple-value-bind operator could easily have been named mv-bind, but it wasn't.

Fundamental differences.

Truth values. Common Lisp has a single value considered to be false, which happens to be the same as the empty list. It can be represented either by the symbol nil or by (); either of these may be quoted with no difference in meaning. Anything else, when considered as a boolean, is true; however, there is a self-evaluating symbol, t, that traditionally is used as the truth value whenever there is no other more appropriate one to use.
In LISP 1.5, the situation was similar: Just like Common Lisp, nil or the empty list are false and everything else is true. But the symbol nil was used by programmers only as the empty list; another symbol, f, was used as the boolean false. It turns out that f is actually a constant whose value is nil. LISP 1.5 had a truth symbol t, like Common Lisp, but it wasn't self-evaluating. Instead, it was a constant whose permanent value was *t*, which was self-evaluating. The following code will set things up so that the LISP 1.5 constants work properly:
(defconstant *t* t) ; (eq *t* t) is true
(defconstant f nil)
Recall the practice in older Lisp code that was mentioned above of using nil in forms like (lambda nil ...) and (prog nil ...), where today we would probably use (). Perhaps this usage is related to the fact that nil represented an empty list more than it did a false value; or perhaps the fact that it seems so odd to us now is related to the fact that there is even less of a distinction between nil the empty list and nil the false value in Common Lisp (there is no separate f constant).
Function storage. In Common Lisp, when you define a function with defun, that definition gets stored somehow in the global environment. LISP 1.5 stores functions in a much simpler way: A function definition goes on the property list of the symbol naming it. The indicator under which the definition is stored is either expr or fexpr or subr or fsubr. The expr/fexpr indicators were used when the function was interpreted (written in Lisp); the subr/fsubr indicators were used when the function was compiled (or written in machine code). Functions can be referred to based on the property under which their definitions are stored; for example, if a function named f has a definition written in Lisp, we might say that "f is an expr."
When a function is interpreted, its lambda expression is what is stored. When a function is compiled or machine coded, a pointer to its address in memory is what is stored.
The choice between expr and fexpr and between subr and fsubr is based on evaluation. Functions that are exprs and subrs are evaluated normally; for example, an expr is effectively replaced by its lambda expression. But when an fexpr or an fsubr is to be processed, the arguments are not evaluated. Instead they are put in a list. The fexpr or fsubr definition is then passed that list and the current environment. The reason for the latter is so that the arguments can be selectively evaluated using eval (which took a second argument containing the environment in which evaluation is to occur). Here is an example of what the definition of an fexpr might look like, LISP 1.5 style. This function takes any number of arguments and prints them all, returning nil.
(LAMBDA (A E)
  (PROG ()
   LOOP (PRINT (EVAL (CAR A) E))
        (COND ((NULL (CDR A)) (RETURN NIL)))
        (SETQ A (CDR A))
        (GO LOOP)))
The "f" in "fexpr" and "fsubr" seems to stand for "form", since fexpr and fsubr functions got passed a whole form.
The top level: evalquote. In Common Lisp, the interpreter is usually available interactively in the form of a "Read-Evaluate-Print-Loop", for which a common abbreviation is "REPL". Its structure is exactly as you would expect from that name: Repeatedly read a form, evaluate it (using eval), and print the results. Note that this model is the same as top level file processing, except that the results of only the last form are printed, when it's done.
In LISP 1.5, the top level is not eval, but evalquote. Here is how you could implement evalquote in Common Lisp:
(defun evalquote (operator arguments)
  (eval (cons operator arguments)))
LISP 1.5 programs commonly look like this (define takes a list of function definitions):
DEFINE ((
    (FUNCTION1 (LAMBDA () ...))
    (FUNCTION2 (LAMBDA () ...))
    ...
))
which evalquote would process as though it had been written
(DEFINE (
    (FUNCTION1 (LAMBDA () ...))
    (FUNCTION2 (LAMBDA () ...))
    ...
))
Evaluation, scope, extent. Before further discussion, here is the evaluator for LISP 1.5 as presented in Appendix B, translated from m-expressions to approximate Common Lisp syntax. This code won't run as it is, but it should give you an idea of how the LISP 1.5 interpreter worked.
(defun evalquote (function arguments)
  (if (atom function)
      (if (or (get function 'fexpr)
              (get function 'fsubr))
          (eval (cons function arguments) nil))
      (apply function arguments nil)))

(defun apply (function arguments environment)
  (cond ((null function) nil)
        ((atom function)
         (let ((expr (get function 'expr))
               (subr (get function 'subr)))
           (cond (expr
                  (apply expr arguments environment))
                 (subr ; see below
                  )
                 (t
                  (apply (cdr (sassoc function environment
                                      (lambda () (error "A2"))))
                         arguments
                         environment)))))
        ((eq (car function) 'label)
         (apply (caddr function)
                arguments
                (cons (cons (cadr function) (caddr function))
                      environment)))
        ((eq (car function) 'funarg)
         (apply (cadr function) arguments (caddr function)))
        ((eq (car function) 'lambda)
         (eval (caddr function)
               (nconc (pair (cadr function) arguments)
                      environment)))
        (t
         (apply (eval function environment) arguments environment))))

(defun eval (form environment)
  (cond ((null form) nil)
        ((numberp form) form)
        ((atom form)
         (let ((apval (get form 'apval)))
           (if apval
               (car apval)
               (cdr (sassoc form environment
                            (lambda () (error "A8")))))))
        ((eq (car form) 'quote)
         (cadr form))
        ((eq (car form) 'function)
         (list 'funarg (cadr form) environment))
        ((eq (car form) 'cond)
         (evcon (cdr form) environment))
        ((atom (car form))
         (let ((expr (get (car form) 'expr))
               (fexpr (get (car form) 'fexpr))
               (subr (get (car form) 'subr))
               (fsubr (get (car form) 'fsubr)))
           (cond (expr
                  (apply expr (evlis (cdr form) environment) environment))
                 (fexpr
                  (apply fexpr (list (cdr form) environment) environment))
                 (subr ; see below
                  )
                 (fsubr ; see below
                  )
                 (t
                  (eval (cons (cdr (sassoc (car form) environment
                                           (lambda () (error "A9"))))
                              (cdr form))
                        environment)))))
        (t
         (apply (car form) (evlis (cdr form) environment) environment))))

(defun evcon (cond environment)
  (cond ((null cond) (error "A3"))
        ((eval (caar cond) environment)
         (eval (cadar cond) environment))
        (t (evcon (cdr cond) environment))))

(defun evlis (list environment)
  (maplist (lambda (j) (eval (car j) environment))
           list))
(The definition of evalquote earlier was a simplification to avoid the special case of special operators in it. LISP 1.5's apply can't handle special operators (which is also true of Common Lisp's apply). Hopefully the little white lie can be forgiven.)
There are several things to note about these definitions. First, it should be reiterated that they will not run in Common Lisp, for many reasons. Second, in evcon an error has been corrected; the original says in the consequent of the second branch (effectively)
(eval (cadar environment) environment) 
Now to address the "see below" comments. In the manual it describes the actions of the interpreter as calling a function called spread, which takes the arguments given in a Lisp function call and puts them into the machine registers expected with LISP 1.5's calling convention, and then executes an unconditional branch instruction after updating the value of a variable called $ALIST to the environment passed to eval or to apply. In the case of fsubr, instead of calling spread, since the function will always get two arguments, it places them directly in the registers.
You will note that apply is considered to be a part of the evaluator, while in Common Lisp apply and eval are quite different. Here it takes an environment as its final argument, just like eval. This fact highlights an incredibly important difference between LISP 1.5 and Common Lisp: When a function is executed in LISP 1.5, it is run in the environment of the function calling it. In contrast, Common Lisp creates a new lexical environment whenever a function is called. To exemplify the differences, the following code, if Common Lisp were evaluated like LISP 1.5, would be valid:
(defun weird (a b)
  (other-weird 5))

(defun other-weird (n)
  (+ a b n))
In Common Lisp, the function weird creates a lexical environment with two variables (the parameters a and b), which have lexical scope and indefinite extent. Since the body of other-weird is not lexically within the form that binds a and b, trying to make reference to those variables is incorrect. You can thwart Common Lisp's lexical scoping by declaring those variables to have indefinite scope:
(defun weird (a b)
  (declare (special a b))
  (other-weird 5))

(defun other-weird (n)
  (declare (special a b))
  (+ a b n))
The special declaration tells the implementation that the variables a and b are to have indefinite scope and dynamic extent.
Let's talk now about the funarg branch of apply. The function/funarg device was introduced some time in the sixties in an attempt to solve the scoping problem exemplified by the following problematic definition (using Common Lisp syntax):
(defun testr (x p f u)
  (cond ((funcall p x) (funcall f x))
        ((atom x) (funcall u))
        (t (testr (cdr x) p f
                  (lambda () (testr (car x) p f u))))))
This function is taken from page 11 of John McCarthy's History of Lisp.
The only problematic part is the (car x) in the lambda in the final branch. The LISP 1.5 evaluator does little more than textual substitution when applying functions; therefore (car x) will refer to whatever x is bound to at the time the function (lambda expression) is applied, not to what it was when the function was written.
How do you fix this issue? The solution employed in LISP 1.5 was to capture the environment present when the function expression is written, using the function operator. When the evaluator encounters a form that looks like (function f), it converts it into (funarg f environment), where environment is the current environment during that call to eval. Then when apply gets a funarg form, it applies the function in the environment stored in the funarg form instead of the environment passed to apply.
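Common Lisp's lexical closures perform this capture automatically, which is why the funarg device has no modern counterpart. A quick illustration:
(let ((adders (mapcar (lambda (n) (lambda (x) (+ x n)))
                      '(1 10 100))))
  ;; Each inner lambda captured its own binding of N.
  (mapcar (lambda (add) (funcall add 1)) adders))
;; => (2 11 101)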
Something interesting arises as a consequence of how the evaluator works. Common Lisp, as is well known, has two separate name spaces for functions and for variables. If a Common Lisp implementation encounters
(lambda (f x) (f x)) 
the result is not a function applying one of its arguments to its other argument, but rather a function applying a function named f to its second argument. You have to use an operator like funcall or apply to use the functional value of the f parameter. If there is no function named f, then you will get an error. In contrast, LISP 1.5 will eventually find the parameter f and apply its functional value, if there isn't a function named f—but it will check for a function definition first. If a Lisp dialect that has a single name space is called a "Lisp-1", and one that has two name spaces is called a "Lisp-2", then I guess you could call LISP 1.5 a "Lisp-1.5"!
How can we deal with indefinite scope when trying to get LISP 1.5 programs to run in Common Lisp? Well, with any luck it won't matter; ideally the program does not have any references to variables that would be out of scope in Common Lisp. However, if there are such references, there is a fairly simple fix: Add special declarations everywhere. For example, say that we have the following (contrived) program, in which define has been translated into defun forms to make it simpler to deal with:
(defun f (x)
  (prog (m)
        (setq m a)
        (setq a 7)
        (return (+ m b x))))

(defun g (l)
  (h (* b a)))

(defun h (i)
  (/ l (f (setq b (setq a i)))))

(defun p ()
  (prog (a b i)
        (setq a 4)
        (setq b 6)
        (setq i 3)
        (return (g (f 10)))))
The result of calling p should be 10/63. To make it work, add special declarations wherever necessary:
(defun f (x)
  (declare (special a b))
  (prog (m)
        (setq m a)
        (setq a 7)
        (return (+ m b x))))

(defun g (l)
  (declare (special a b l))
  (h (* b a)))

(defun h (i)
  (declare (special a b l i))
  (/ l (f (setq b (setq a i)))))

(defun p ()
  (prog (a b i)
        (declare (special a b i))
        (setq a 4)
        (setq b 6)
        (setq i 3)
        (return (g (f 10)))))
Be careful about the placement of the declarations. It is required that the one in p be inside the prog, since that is where the variables are bound; putting it at the beginning (i.e., before the prog) would do nothing because the prog would create new lexical bindings.
This method is not optimal, since it really doesn't help too much with understanding how the code works (although being able to see which variables are free and which are bound, by looking at the declarations, is very helpful). A better way would be to factor out the variables used among several functions (as long as you are sure that it is used in only those functions) and put them in a let. Doing that is more difficult than using global variables, but it leads to code that is easier to reason about. Of course, if a variable is used in a large number of functions, it might well be a better choice to create a global variable with defvar or defparameter.
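For instance, if a and b were used only by f, the contrived example above could be reworked along these lines (a sketch of the idea, not a drop-in replacement, since it changes the variables from dynamic to lexical):
(let (a b)
  ;; A and B are now lexical variables shared by exactly the
  ;; functions defined inside this LET.
  (defun f (x)
    (prog (m)
          (setq m a)
          (setq a 7)
          (return (+ m b x)))))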
Not all LISP 1.5 code is as bad as that example!
Join us next time as we look at the LISP 1.5 library. In the future, I think I'll make some posts talking about getting specific programs running. If you see any errors, please let me know.
submitted by kushcomabemybedtime to lisp

Part 2: Tools & Info for Sysadmins - Mega List of Tips, Tools, Books, Blogs & More

(continued from part 1)
Unlocker is a tool to help delete those irritating locked files that give you an error message like "cannot delete file" or "access is denied." It helps with killing processes, unloading DLLs, deleting index.dat files, as well as unlocking, deleting, renaming, and moving locked files—typically without requiring a reboot.
IIS Crypto's newest version adds advanced settings; registry backup; new, simpler templates; support for Windows Server 2019 and more. This tool lets you enable or disable protocols, ciphers, hashes and key exchange algorithms on Windows and reorder SSL/TLS cipher suites from IIS, change advanced settings, implement best practices with a single click, create custom templates and test your website. Available in both command line and GUI versions.
RocketDock is an application launcher with a clean interface that lets you drag/drop shortcuts for easy access and minimize windows to the dock. Features running application indicators, multi-monitor support, alpha-blended PNG and ICO icons, auto-hide and popup on mouse over, positioning and layering options. Fully customizable, portable, and compatible with MobyDock, ObjectDock, RK Launcher and Y'z Dock skins. Works even on slower computers and is Unicode compliant. Suggested by lieutenantcigarette: "If you like the dock on MacOS but prefer to use Windows, RocketDock has you covered. A superb and highly customisable dock that you can add your favourites to for easy and elegant access."
Baby FTP Server offers only the basics, but with the power to serve as a foundation for a more-complex server. Features include multi-threading, a real-time server log, support for PASV and non-PASV mode, ability to set permissions for download/upload/rename/delete/create directory. Only allows anonymous connections. Our thanks to FatherPrax for suggesting this one.
Strace is a Linux diagnostic, debugging and instructional userspace tool with a traditional command-line interface. Uses the ptrace kernel feature to monitor and tamper with interactions between processes and the kernel, including system calls, signal deliveries and changes of process state.
exa is a small, fast replacement for ls with more features and better defaults. It uses colors to distinguish file types and metadata, and it recognizes symlinks, extended attributes and Git. All in one single binary. phils_lab describes it as "'ls' on steroids, written in Rust."
rsync is a faster file transfer program for Unix to bring remote files into sync. It sends just the differences in the files across the link, without requiring both sets of files to be present at one of the ends. Suggested by zorinlynx, who adds that "rsync is GODLY for moving data around efficiently. And if an rsync is interrupted, just run it again."
Matter Wiki is a simple WYSIWYG wiki that can help teams store and collaborate. Every article gets filed under a topic, transparently, so you can tell who made what changes to which document and when. Thanks to bciar-iwdc for the recommendation.
LockHunter is a file unlocking tool that enables you to delete files that are being blocked for unknown reasons. Can be useful for fighting malware and other programs that are causing trouble. Deletes files into the recycle bin so you can restore them if necessary. Chucky2401 finds it preferable to Unlocker, "since I am on Windows 7. There are no new updates since July 2017, but the last beta was in June of this year."
aria2 is a lightweight multi-source command-line download utility that supports HTTP/HTTPS, FTP, SFTP, BitTorrent and Metalink. It can be manipulated via built-in JSON-RPC and XML-RPC interfaces. Recommended by jftuga, who appreciates it as a "cross-platform command line downloader (similar to wget or curl), but with the -x option can run a segmented download of a single file to increase throughput."
Free Services
Temp-Mail allows you to receive email at a temporary address that self-destructs after a certain period of time. Outwit all the forums, Wi-Fi owners, websites and blogs that insist you register to use them. Petti-The-Yeti says, "I don't give any company my direct email anymore. If I want to trial something but they ask for an email signup, I just grab a temporary email from here, sign up with it, and wait for the trial link or license info to come through. Then, you just download the file and close the website."
Duck DNS will point a DNS (sub domains of duckdns.org) to an IP of your choice. DDNS is a handy way for you to refer to a server/router with an easily rememberable name for situations when the server's ip address will likely change. Suggested by xgnarf, who finds it "so much better for the free tier of noip—no 30-day nag to keep your host up."
Joe Sandbox detects and analyzes potential malicious files and URLs on Windows, Android, Mac OS, Linux and iOS for suspicious activities. It performs deep malware analysis and generates comprehensive and detailed reports. The Community Edition of Joe Sandbox Cloud allows you to run a maximum of 6 analyses per month, 3 per day on Windows, Linux and Android with limited analysis output. This one is from dangibbons94, who wanted to "share this cool service ... for malware analysis. I usually use Virus total for URL scanning, but this goes a lot more in depth. I just used basic analysis, which is free and enough for my needs."
Hybrid Analysis is a malware analysis service that detects and analyzes unknown threats for the community. This one was suggested by compupheonix, who adds that it "gets you super detailed reports... it's about the most fleshed out and detailed one I can find."
JustBeamIt is a file-transfer service that allows you to send files of any size via a peer-to-peer streaming model. Simply drag and drop your file and specify the recipient's email address. They will then receive a link that will trigger the download directly from your computer, so the file does not have to be uploaded to the service itself. The link is good for one download and expires after 10 minutes. Thanks to cooljacob204sfw for the recommendation!
ShieldsUP is a quick but powerful internet security checkup and information service. It was created by security researcher Steve Gibson to scan ports and let you know which ones have been opened through your firewalls or NAT routers.
Firefox Send is an encrypted file transfer service that allows you to share files up to 2.5GB from any browser or an Android app. Uses end-to-end encryption to keep data secure and offers security controls you can set. You can determine when your file link expires, the number of downloads, and whether to add a password. Your recipient receives a link to download the file, and they don’t need a Firefox account. This one comes from DePingus, who appreciates the focus on privacy. "They have E2E, expiring links, and a clear privacy policy."
Free DNS is a service where programmers share domain names with one another at no cost. Offers free hosting as well as dynamic DNS, static DNS, subdomain and domain hosting. They can host your domain's DNS as well as allowing you to register hostnames from domains they're hosting already. If you don't have a domain, you can sign up for a free account and create up to 5 subdomains off the domains others have contributed and point these hosts anywhere on the Internet. Thanks to 0x000000000000004C (yes, that's a username) for the suggestion!
ANY.RUN is an interactive malware analysis service for dynamic and static research of the majority of threats in any environment. It can provide a convenient in-depth analysis of new, unidentified malicious objects and help with the investigation of incidents. ImAshtonTurner appreciates it as "a great sandbox tool for viewing malware, etc."
Plik is a scalable, temporary file upload system similar to wetransfer that is written in golang. Thanks go to I_eat_Narwhals for this one!
Free My IP offers free, dynamic DNS. This service comes with no login, no ads, no newsletters, no links to click and no hassle. Kindly suggested by Jack of All Trades.
Mailinator provides free, temporary email inboxes on a receive-only, attachment-free system that requires no sign-up. All @mailinator.com addresses are public, readable and discoverable by anyone at any time—but are automatically deleted after a few hours. Can be a nice option for times when you to give out an address that won't be accessible longterm. Recommended by nachomountain, who's been using it "for years."
Magic Wormhole is a service for sending files directly with no intermediate upload, no web interface and no login. When both parties are online with the minimal software installed, the wormhole is invoked via the command line, identifying the file you want to send. The server then provides a speakable, one-time-use password that you give the recipient. When they enter that password in their wormhole console, key exchange occurs and the download begins directly between your computers. rjohnson99 explains, "Magic Wormhole is sort of like JustBeamIt but is open-source and is built on Python. I use it a lot on Linux servers."
EveryCloud's Free Phish is our own, new Phishing Simulator. Once you've filled in the form and logged in, you can choose from lots of email templates (many of which we've copied from what we see in our Email Security business) and landing pages. Run a one-off free phish, then see who clicked or submitted data so you can understand where your organization is vulnerable and act accordingly.
Hardening Guides
CIS Hardening Guides contain the system security benchmarks developed by a global community of cybersecurity experts. Over 140 configuration guidelines are provided to help safeguard systems against threats. Recommended by cyanghost109 "to get a start on looking at hardening your own systems."
Podcasts
Daily Tech News is Tom Merrit's show covering the latest tech issues with some of the top experts in the field. With the focus on daily tech news and analysis, it's a great way to stay current. Thanks to EmoPolarbear for drawing it to our attention.
This Week in Enterprise Tech is a podcast that features IT experts explaining the complicated details of cutting-edge enterprise technology. Join host Lou Maresca on this informative exploration of enterprise solutions, with new episodes recorded every Friday afternoon.
Security Weekly is a podcast where a "bunch of security nerds" get together and talk shop. Topics are greatly varied, and the atmosphere is relaxed and conversational. The show typically tops out at 2 hours, which is perfect for those with a long commute. If you’re fascinated by discussion of deep technical and security-related topics, this may be a nice addition to your podcast repertoire.
Grumpy Old Geeks—What Went Wrong on the Internet and Who's To Blame is a podcast about the internet, technology and geek culture—among other things. The hosts bring their grumpy brand of humor to the "state of the world as they see it" in these roughly hour-long weekly episodes. Recommended by mkaxsnyder, who enjoys it because, "They are a good team that talk about recent and relevant topics from an IT perspective."
The Social-Engineer Podcast is a monthly discussion among the hosts—a group of security experts from SEORG—and a diverse assortment of guests. Topics focus around human behavior and how it affects information security, with new episodes released on the second Monday of every month. Thanks to MrAshRhodes for the suggestion.
The CyberWire podcasts discuss what's happening in cyberspace, providing news and commentary from industry experts. This cyber security-focused news service delivers concise, accessible, and relevant content without the gossip, sensationalism, and the marketing buzz that often distract from the stories that really matter. Appreciation to supermicromainboard for the suggestion.
Malicious Life is a podcast that tells the fascinating—and often unknown—stories of the wildest hacks you can ever imagine. Host Ran Levi, a cybersecurity expert and author, talks with the people who were actually involved to reveal the history of each event in depth. Our appreciation goes to peraphon for the recommendation.
The Broadcast Storm is a podcast for Cisco networking professionals. BluePieceOfPaper suggests it "for people studying for their CCNA/NP. Kevin Wallace is a CCIE Collaboration so he knows his *ishk. Good format for learning too. Most podcasts are about 8-15 mins long and its 'usually' an exam topic. It will be something like "HSRP" but instead of just explaining it super boring like Ben Stein reading a powerpoint, he usually goes into a story about how (insert time in his career) HSRP would have been super useful..."
Software Engineering Radio is a podcast for developers who are looking for an educational resource with original content that isn't recycled from other venues. Consists of conversations on relevant topics with experts from the software engineering world, with new episodes released three to four times per month. a9JDvXLWHumjaC tells us this is "a solid podcast for devs."
Books
System Center 2012 Configuration Manager is a comprehensive technical guide designed to help you optimize Microsoft's Configuration Manager 2012 according to your requirements and then to deploy and use it successfully. This methodical, step-by-step reference covers: the intentions behind the product and its role in the broader System Center product suite; planning, design, and implementation; and details on each of the most-important feature sets. Learn how to leverage the user-centric capabilities to provide anytime/anywhere services & software, while strengthening control and improving compliance.
Network Warrior: Everything You Need to Know That Wasn’t on the CCNA Exam is a practical guide to network infrastructure. Provides an in-depth view of routers and routing, switching (with Cisco Catalyst and Nexus switches as examples), SOHO VoIP and SOHO wireless access point design and configuration, introduction to IPv6 with configuration examples, telecom technologies in the data-networking world (including T1, DS3, frame relay, and MPLS), security, firewall theory and configuration, ACL and authentication, Quality of Service (QoS), with an emphasis on low-latency queuing (LLQ), IP address allocation, Network Time Protocol (NTP) and device failures.
Beginning the Linux Command Line is your ally in mastering Linux from the keyboard. It is intended for system administrators, software developers, and enthusiastic users who want a guide that will be useful for most distributions—i.e., all items have been checked against Ubuntu, Red Hat and SUSE. Addresses administering users and security and deploying firewalls. Updated to the latest versions of Linux to cover files and directories, including the Btrfs file system and its management and systemd boot procedure and firewall management with firewalld.
Modern Operating Systems, 4th Ed. is written for students taking intro courses on Operating Systems and for those who want an OS reference guide for work. The author, an OS researcher, includes both the latest materials on relevant operating systems as well as current research. The previous edition of Modern Operating Systems received the 2010 McGuffey Longevity Award that recognizes textbooks for excellence over time.
Time Management for System Administrators is a guide for organizing your approach to this challenging role in a way that improves your results. Bestselling author Thomas Limoncelli offers a collection of tips and techniques for navigating the competing goals and concurrent responsibilities that go along with working on large projects while also taking care of individual user's needs. The book focuses on strategies to help with daily tasks that will also allow you to handle the critical situations that inevitably require your attention. You'll learn how to manage interruptions, eliminate time wasters, keep an effective calendar, develop routines and prioritize, stay focused on the task at hand and document/automate to speed processes.
The Practice of System and Network Administration, 3rd Edition introduces beginners to advanced frameworks while serving as a guide to best practices in system administration that is helpful for even the most advanced experts. Organized into four major sections that build from the foundational elements of system administration through improved techniques for upgrades and change management to exploring assorted management topics. Covers the basics and then moves onto the advanced things that can be built on top of those basics to wield real power and execute difficult projects.
Learn Windows PowerShell in a Month of Lunches, Third Edition is designed to teach you PowerShell in a month's worth of 1-hour lessons. This updated edition covers PowerShell features that run on Windows 7, Windows Server 2008 R2 and later, PowerShell v3 and later, and it includes v5 features like PowerShellGet. For PowerShell v3 and up, Windows 7 and Windows Server 2008 R2 and later.
Troubleshooting with the Windows Sysinternals Tools is a guide to the powerful Sysinternals tools for diagnosing and troubleshooting issues. Sysinternals creator Mark Russinovich and Windows expert Aaron Margosis provide a deep understanding of Windows core concepts that aren’t well-documented elsewhere along with details on how to use Sysinternals tools to optimize any Windows system’s reliability, efficiency, performance and security. Includes an explanation of Sysinternals capabilities, details on each major tool, and examples of how the tools can be used to solve real-world cases involving error messages, hangs, sluggishness, malware infections and more.
DNS and BIND, 5th Ed. explains how to work with the Internet's distributed host information database—which is responsible for translating names into addresses, routing mail to its proper destination, and listing phone numbers according to the ENUM standard. Covers BIND 9.3.2 & 8.4.7, the what/how/why of DNS, name servers, MX records, subdividing domains (parenting), DNSSEC, TSIG, troubleshooting and more. PEPCK tells us this is "generally considered the DNS reference book (aside from the RFCs of course!)"
Windows PowerShell in Action, 3rd Ed. is a comprehensive guide to PowerShell. Written by language designer Bruce Payette and MVP Richard Siddaway, this volume gives a great introduction to Powershell, including everyday use cases and detailed examples for more-advanced topics like performance and module architecture. Covers workflows and classes, writing modules and scripts, desired state configuration and programming APIs/pipelines.This edition has been updated for PowerShell v6.
Zero Trust Networks: Building Secure Systems in Untrusted Networks explains the principles behind zero trust architecture, along with what's needed to implement it. Covers the evolution of perimeter-based defenses and how they evolved into the current broken model, case studies of zero trust in production networks on both the client and server side, example configurations for open-source tools that are useful for building a zero trust network and how to migrate from a perimeter-based network to a zero trust network in production. Kindly recommended by jaginfosec.
Tips
Here are a couple handy Windows shortcuts:
Here's a shortcut for a 4-pane explorer in Windows without installing 3rd-party software:
(Keep the win key down for the arrows, and no pauses.) Appreciation goes to ZAFJB for this one.
Our recent tip for a shortcut to get a 4-pane explorer in Windows, triggered this suggestion from SevaraB: "You can do that for an even larger grid of Windows by right-clicking the clock in the taskbar, and clicking 'Show windows side by side' to arrange them neatly. Did this for 4 rows of 6 windows when I had to have a quick 'n' dirty "video wall" of windows monitoring servers at our branches." ZAFJB adds that it actually works when you right-click "anywhere on the taskbar, except application icons or start button."
This tip comes courtesy of shipsass: "When I need to use Windows Explorer but I don't want to take my hands off the keyboard, I press Windows-E to launch Explorer and then Ctrl-L to jump to the address line and type my path. The Ctrl-L trick also works with any web browser, and it's an efficient way of talking less-technical people through instructions when 'browse to [location]' stumps them."
Clear browser history/cookies by pressing CTRL-SHIFT-DELETE on most major browsers. Thanks go to synapticpanda, who adds that this "saves me so much time when troubleshooting web apps where I am playing with the cache and such."
To rename a file with F2, while still editing the name of that file: Hit TAB to tab into the renaming of the next file. Thanks to abeeftaco for this one!
Alt-D is a reliable alternative to Ctrl-L for jumping to the address line in a browser. Thanks for this one go to fencepost_ajm, who explains: "Ctrl-L comes from the browser side as a shortcut for Location, Alt-D from the Windows Explorer side for Directory."
Browser shortcut: When typing a URL that ends with dot com, Ctrl + Enter will place the ".com" and take you to the page. Thanks to wpierre for this one!
This tip comes from anynonus, as something used daily that saves a few clicks: "Running a program with ctrl + shift + enter from start menu will start it as administrator (alt + y will select YES to run as admin) ... my user account is local admin [so] I don't feel like that is unsafe"
Building on our PowerShell resources, we received the following suggestion from halbaradkenafin: aka.ms/pskoans is "a way to learn PowerShell using PowerShell (and Pester). It's really cool and a bunch of folks have high praise for it (including a few teams within MSFT)."
Keyboard shortcut: If you already have an application open, hold ctrl + shift and middle click on the application in your task bar to open another instance as admin. Thanks go to Polymira for this one.
Remote Server Tip: "Critical advice. When testing out network configuration changes, prior to restarting the networking service or rebooting, always create a cron job that will restore your original network configuration and then reboot/restart networking on the machine after 5 minutes. If your config worked, you have enough time to remove it. If it didn't, it will fix itself. This is a beautifully simple solution that I learned from my old mentor at my very first job. I've held on to it for a long time." Thanks go to FrigidNox for the tip!
Websites
Deployment Research is the website of Johan Arwidmark, MS MVP in System Center Cloud and Datacenter Management. It is dedicated to sharing information and guidance around System Center, OS deployment, migration and more. The author shares tips and tricks to help improve the quality of IT Pros’ daily work.
Next of Windows is a website on (mostly) Microsoft-related technology. It's the place where Kent Chen—a computer veteran with many years of field experience—and Jonathan Hu—a web/mobile app developer and self-described "cool geek"—share what they know, what they learn and what they find in the hope of helping others learn and benefit.
High Scalability brings together all the relevant information about building scalable websites in one place. Because building a website with confidence requires a body of knowledge that can be slow to develop, the site focuses on moving visitors along the learning curve at a faster pace.
Information Technology Research Library is a great resource for IT-related research, white papers, reports, case studies, magazines, and eBooks. This library is provided at no charge by TradePub.com. GullibleDetective tells us it offers "free PDF files from a WIIIIIIDE variety of topics, not even just IT. Only caveat: as its a vendor-supported publishing company, you will have to give them a bit of information such as name, email address and possibly a company name. You undoubtedly have the ability to create fake information on this, mind you. The articles range from Excel templates, learning python, powershell, nosql etc. to converged architecture."
SS64 is a web-based reference guide for syntax and examples of the most-common database and OS computing commands. Recommended by Petti-The-Yeti, who adds, "I use this site all the time to look up commands and find examples while I'm building CMD and PS1 scripts."
Phishing and Malware Reporting. This website helps you put a stop to scams by getting fraudulent pages blocked. Easily report phishing webpages so they can be added to blacklists in as little as 15 minutes of your report. Player024 tells us, "I highly recommend anyone in the industry to bookmark this page... With an average of about 10 minutes of work, I'm usually able to take down the phishing pages we receive thanks to the links posted on that website."
A Slack Channel
Windows Admin Slack is a great drive-by resource for the Windows sysadmin. This team has 33 public channels in total that cover different areas of helpful content on Windows administration.
Blogs
KC's Blog is the place where Microsoft MVP and web developer Kent Chen shares his IT insights and discoveries. The rather large library of posts offer helpful hints, how-tos, resources and news of interest to those in the Windows world.
The Windows Server Daily is the ever-current blog of technologist Katherine Moss, VP of open source & community engagement for StormlightTech. Offers brief daily posts on topics related to Windows server, Windows 10 and Administration.
An Infosec Slideshow
This security training slideshow was created for use during a quarterly infosec class. The content is offered generously by shalafi71, who adds, "Take this as a skeleton and flesh it out on your own. Take an hour or two and research the things I talk about. Tailor this to your own environment and users. Make it relevant to your people. Include corporate stories, include your audience, exclude yourself. This ain't about how smart you are at infosec, and I can't stress this enough, talk about how people can defend themselves. Give them things to look for and action they can take. No one gives a shit about your firewall rules."
Tech Tutorials
Tutorialspoint Library. This large collection of tech tutorials is a great resource for online learning. You'll find nearly 150 high-quality tutorials covering a wide array of languages and topics—from fundamentals to cutting-edge technologies. For example, this Powershell tutorial is designed for those with practical experience handling Windows-based Servers who want to learn how to install and use Windows Server 2012.
The Python Tutorial is a nice introduction to many of Python’s best features, enabling you to read and write Python modules and programs. It offers an understanding of the language's style and prepares you to learn more about the various Python library modules described in 'The Python Standard Library.' Kindly suggested by sharjeelsayed.
SysAdmin Humor
Day in the Life of a SysAdmin Episode 5: Lunch Break is an amusing look at a SysAdmin's attempt to take a brief lunch break. We imagine many of you can relate!
Have a fantastic week and as usual, let me know any comments or suggestions.
u/crispyducks
submitted by crispyducks to sysadmin

[Spoilers] So, I promised to write a tirade on what I think is wrong with CDDA, and how I'd refocus the game on a GD level, and here it is.

And do not get me wrong, it absolutely still is my #1 favorite game, it just has some.. really, really major glaring flaws. Let me pre-emptively apologize for how meandery this post is, and warn you that if you never got far in the game, you might want to avoid the spoilers. If you don't want to read all of it, please read the "What is wrong with CDDA" section, and the "tl;dr/Summary" one, they are the most important outline of what I'm talking about, the rest can be a bit incoherent/implausible.
I would also like to ping u/mlangsdorf and u/kevingranade, as well as u/Raskov75 and u/TechnicalBen, who have shown interest in this topic when I mentioned it in another thread a few days ago.
I have difficulty keeping my mind on track on my own, so if you asked me pointed questions, I could probably come up with something better than the idealized thoughts below.
I would be grateful to anyone who reads it.

What is wrong with CDDA.

In a way, I think that CDDA is a game that kinda hinges entirely on its complexity and amount of content, rather than utilizing it cleverly. I absolutely adore some aspects of it: The way crafting supports alternative materials to add depth to resource management, the systemic repair/reinforcement/modification of some items, how much you can do with vehicles, and I truly love the earlygame, and I loved figuring the game out too, but I wish it had lasted longer, far longer. Once you know what to do, the game loses most of its depth.
First off, the problem with progression in CDDA: rather than a set or graph of fuzzy progression milestones that you can revisit and do better at, it's more of a checklist. Of tools, of books, of skill levels, and sadly, most of that reduces to an extremely routine process of surviving the earlygame, and then just accumulating books+tools and enough food to coast by, until you're ready to level up and leave the early (and mid) game behind. And most of that progression reduces to a single central measure. You either get stronger through an action, or you don't; there is mostly no real "sideways" progression.
It's a common complaint I have with RPG games, and admittedly, Cata does far better on this front than they do, but it's still kinda bad, especially when starting a new character - in a game with long-term progression like CDDA, you have two options: either go through the same methodology from scratch, or... yup, just read your last character's books before butchering it for bionics. Neither is great fun.
And furthermore, as you progress, you just... leave content behind. You quickly reach a point where normal zombies, and even the brutes, mean nothing to you, much less the animals or the woods. Most of the world just drops off your mental map as irrelevant. From that point, there is nothing you have genuine "reason" to do, beyond just your own whim. Once you know how to stay safe, the main endgame locations - labs - are honestly trivial, and once you figure a certain item out, they stop even being capable of posing a risk unless you get brutally careless.

What I'd do instead

And some of these changes are gonna be... major, some implausible at this point in the game's development. Nonetheless, please treat the below as food for thought, rather than anything more definitive. I also struggled a lot to order and organize this, so forgive me.

1. Skills

First off, and you'll see why I'm proposing this in subsequent sections: IMO, skills would be far better if they were split into individual "microskills", e.g. Electronics would be a "field", rather than a "skill", which would contain individual subskills such as soldering, signal processing, power, basic/intermediate/advanced circuit theory, microprocessors, bionics, etc.
Furthermore, rather than having a single level, each skill would have three sequential components, the proportion of each depending on the skill in question: Concept, theory, practice. A well-educated human, for example, might know the concept behind basic mechanics, and thus be able to - eventually - improvise upon it, or figure out the outline of basic electronics by studying an advanced book, but on the other hand, just reading an electronics 101 doesn't instantly make you an expert at soldering.
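To make that bookkeeping concrete, here's a minimal sketch of how such a split could be represented - field, subskill, and the three sequential components. All names and weights below are my own invention for illustration, not anything from CDDA's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Subskill:
    """One microskill, e.g. 'soldering' inside the Electronics field."""
    name: str
    concept: float = 0.0   # knowing this exists and roughly how it works
    theory: float = 0.0    # book knowledge, raised mostly by studying
    practice: float = 0.0  # hands-on competence, raised mostly by doing

    def effective_level(self) -> float:
        # Practice dominates, but concept/theory let you improvise and
        # study upward; the weights are arbitrary placeholders.
        return 0.2 * self.concept + 0.3 * self.theory + 0.5 * self.practice

@dataclass
class Field:
    """A field such as Electronics groups many subskills."""
    name: str
    subskills: dict = field(default_factory=dict)

electronics = Field("Electronics")
for name in ("soldering", "signal processing", "circuit theory", "bionics"):
    electronics.subskills[name] = Subskill(name)

# Reading Electronics 101 raises theory, not practice:
electronics.subskills["soldering"].theory = 2.0
print(electronics.subskills["soldering"].effective_level())  # 0.6
```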
And yes, I'm aware that that sounds like a huge pain in the ass to manage, which brings me to the rationale behind it: I think that expecting and requiring a sole survivor to become fully self-sufficient and capable of everything, on their own, is batshit, which brings me to the second point:

2. Survivors

2.1. Interactions and knowledge.

IMO, in an open-ended game, there is only one way to do dialogue, namely through a topic system much like Morrowind's, where your top-level interface uses fixed hotkeys for main "verbs" such as "Talk about...", "Tasks", "Trade", "Training" (both ways), "Rules", "Goodbye", and then subscreens which feature the actual options, where you should be able to ask the NPC about cities they have visited, or the one they come from, to gather information about other landmarks or creatures/species/people they encountered.
The system does not need to be elaborate, but it needs to be organized, and capable of supporting simple systematic communication of knowledge, ideally both ways, as well as how it affects your reputation. Caves of Qud has a great system.¹
Aside from skills, NPCs should have other "knowledge", such as that about cities, creatures, or that you murdered their companion and they hate you for it, or that they have a health problem they need you to fix (or try to fix themselves, if they spot the right item), that interacts with and affects their behavior/dialogue in at least basic ways. I have no idea how far such a system could be taken, so I'll not propose anything further.
¹ possibly based on a long-ass suggestion post I pitched to the dev years ago, but I'm very probably just giving myself airs
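For illustration, here's a minimal sketch of what such a topic system could look like under the hood - fixed verbs on top, subtopics generated from whatever the NPC actually knows. The structures and entries are all hypothetical:

```python
# Fixed top-level "verbs"; subtopics come from the NPC's own knowledge.
VERBS = ["Talk about...", "Tasks", "Trade", "Training", "Rules", "Goodbye"]

npc_knowledge = {
    "cities": {"Concord": "Overrun, but the library still stands."},
    "creatures": {"mi-go": "It dragged my friend off. We never found him."},
    "grudges": {"you": "You shot their companion; expect hostility."},
}

def subtopics(verb: str, knowledge: dict) -> list:
    """'Talk about...' exposes one entry per thing the NPC knows about."""
    if verb != "Talk about...":
        return []  # other verbs would open their own subscreens
    return [f"{kind}: {name}" for kind, entries in knowledge.items()
            for name in entries]

for verb in VERBS:
    print(verb, subtopics(verb, npc_knowledge))
```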

2.2. Pooling resources together

Instead of singular player characters that exist in a vacuum, fully capable of becoming an expert at everything through the previous character's books, I would base the game itself around creating a faction of NPCs with distinct backgrounds and skills, and the ability to learn and teach each other. Many crafts would take more time, but rather than being executed by the PC, they would be done by the NPCs, who would slowly become masters of their craft, and when you die, the accumulated knowledge survives not through books you've got around, but through other characters who have polished those skills.
After death, you would be able to switch to another character of your faction - and having to deal with their traits and quirks would probably be pretty fun as well. It would also mean that "succession" can't instantly make you OP again through books, and despite losing less, you would have to invest more than just boring grind into regaining what you lost. Being able to switch between characters during a run could potentially also be fun.
Furthermore, this would give a good reason to create bases, not by gating certain crafts or speeding tasks up behind NPC factions, but by giving them real, meaningful utility of being capable of much the same things as you, except in the background so you don't have to grind manually for days. Instead of leveling a single survivor up into a walking death machine capable of every craft, you'd be doing what humans have always done naturally: Pooling resources together, and advancing as a "society".
And bases bring me to my third point:

3. Static vs mobile bases.

3.1 Static bases.

And the "vs" here is more to highlight the fact that there is simply no competition. Not only is vehicle building more fleshed out, it is also capable of more, with less hassle, and on the move. Even if you wanted to avoid vehicles, there are no static alternatives: Fridges don't work, ovens don't work, there is no welding rig or UPS furniture, no power grids, no convenient liquid storage, or.. anything, really.
I think that the game would be much more fun if the player had both the ability and reason to "colonize" buildings, both earlier and later on. The ability to drag some freezers, fridges, and ovens together, connecting them to a generator or some other local non-vehicle source of power, would provide a new aspect of the game. Right now, even if you decide to build a base, there is extremely little you can do with it; the majority of what you build is just cosmetic, honestly.
Ideally, static constructions would be "modular" like vehicle tiles, like being able to install curtains over metal bars or a door frame or run wiring through walls, or replace an oven's power cord with a wireless replacement or internal generator... possibly even make engines/etc. generate multiple resources, e.g. heat as well as horsepower.
I also think that all objects in the game should follow the same overall durability systems: A combination of static tiles' damage absorption, vehicle parts' HP, and items' durability levels. Like I said, many of these things would be a huge PITA to change at this point.

3.2. Vehicles.

Aside from the durability change mentioned above, IMO, vehicles would be much better off if they needed transmission axles, wiring, and piping. This way, merging two vehicles through any kind of connector could keep them separate, while also imposing more constraints on vehicle construction, leading to the process being a bit more involved, and the ability to make components interact with each other in a slightly more systemic way - now faucets draw from the tank they're actually piped to. What happens if you use alcohol for coolant?
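A toy sketch of the "explicit piping" idea, assuming a simple part-connectivity graph (all part names invented): a faucet only sees liquid in tanks it is actually piped to, so plumbing the alcohol tank into the coolant loop keeps it away from the tap:

```python
from collections import defaultdict

pipes = defaultdict(set)   # undirected adjacency between vehicle parts

def connect(a: str, b: str):
    pipes[a].add(b)
    pipes[b].add(a)

connect("faucet", "pipe1")
connect("pipe1", "tank_water")
# The alcohol tank is plumbed into the engine's coolant loop instead:
connect("tank_alcohol", "coolant_loop")

def reachable(src: str, dst: str) -> bool:
    """Walk the pipe graph to see whether two parts are connected."""
    seen, frontier = {src}, [src]
    while frontier:
        part = frontier.pop()
        if part == dst:
            return True
        for nxt in pipes[part] - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False

print(reachable("faucet", "tank_water"))    # True
print(reachable("faucet", "tank_alcohol"))  # False
```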
But of course, the most important thing with regards to progression is:

4. Crafting

4.1. Success and progress.

One thing I would change: instead of a sort of... ambiguous mechanic of "You resume your task", I would create temporary "unfinished" items for in-progress crafting, of any kind.
Second, I think that craft success/failure is too binary, and I would replace it with a system where you are given the stated chance of crafting what you want, and rather than failing at the end, at some point you can get a prompt "You have made a mistake and wasted %nx %material, use another and continue?", so that even at far lower skill levels - as long as you know the concept/theory - you can eventually craft what you want, in a semi-deterministic manner (see the sketch below).
Thirdly, whenever you waste, destroy, etc. a component/item, it should fall apart into "breaks into" items, rather than vanishing from existence. A lot of those scraps should be useless, but I am opposed to objects vanishing out of existence on principle, especially when it contributes to a "hoard until you get the maximum use out of your resources" dynamic in terms of crafting.
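A minimal sketch of the semi-deterministic crafting loop from the second point above, with invented stage counts and probabilities: mistakes surface mid-craft as wasted components (which, per the third point, would then break into scraps rather than vanish), and the craft only fails outright when you run out of spares:

```python
import random

def craft(success_chance: float, spare_components: int):
    """Each stage can go wrong, but a mistake only wastes one spare
    component ("use another and continue?"); the craft fails outright
    only when no spares remain. The stage count is a placeholder."""
    wasted = 0
    for _stage in range(5):
        if random.random() > success_chance:
            if wasted == spare_components:
                return False, wasted   # nothing left to feed the mistake
            wasted += 1
    return True, wasted

random.seed(42)
done, wasted = craft(success_chance=0.6, spare_components=3)
print(f"crafted={done}, components wasted={wasted}")
```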

4.2. Components and item modification

I firmly believe that part of what makes vehicles amazing is the way you can compose different available components, figure out what you can make with them, and how to achieve it, and gun/clothing modification is also fun, but...
In terms of CDDA: I think that those modifications should also be blueprints, and that there should be more of them, based on a twofold system: Modification capacity, and modification consequences. For example, a coat might have 0/2 lining, 0/4 padding, 0/1 coating slots, and each filled slot results in extra encumbrance based on both the item's suitability for modification and the specific mod you do. You should be able to add thermoelectric lining to items, "coat" it with rain-resistant filament, pad it with both some kevlar and extra pockets, e.g. tailor your own gear yourself. IMO, as many items as possible should be the "basis" for the player to work on, rather than a final end-goal, like the survivor clothing.
Wouldn't it be fun to make your own, custom survivor suit out of the best items you can find, rather than just rush towards some single goal craftable? What if you could add nails to wooden weaponry as a mod, electrify any melee weapon, serrate the blade of your trusty kukri, or coat your arrows in poison?
In terms of a game I'd make: I would make as many items as possible the sum of their parts, rather than a single static object, e.g. give every item a specialized "inventory" for components. Those components would be stuff like spark plugs for engines, stock/sights/etc. for firearms, different types of batteries for electronics, CPUs, a battery compartment (to replace it with a corded/UPS/etc. one), an accumulator, or a better/worse sawblade.. point is, you should be able to jury-rig and improvise over broken components, pool items together for parts, and repair of furniture, items, and objects could become a more involved process than "do I have the right tool and material chunk to repair".
A good example would be being able to create a battery cell out of several individual sub-cells, e.g. make the first one a remotely rechargeable UPS sub-battery, then two normally rechargeable ones, and finally a plutonium mini-battery, in case you really need your tool in an emergency.
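A sketch of that battery example, treating the item as a container of component sub-cells drained in order, with the non-rechargeable plutonium cell held back as an emergency reserve. All names and numbers are invented:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    kind: str
    charge: int
    rechargeable: bool

pack = [
    Cell("ups_linked", 100, True),    # remotely rechargeable
    Cell("standard", 200, True),
    Cell("standard", 200, True),
    Cell("plutonium", 500, False),    # emergency reserve
]

def draw(pack: list, amount: int, allow_reserve: bool = False) -> int:
    """Drain cells front to back; touch the non-rechargeable reserve
    only when explicitly allowed. Returns the charge actually drawn."""
    drawn = 0
    for cell in pack:
        if cell.kind == "plutonium" and not allow_reserve:
            continue
        take = min(cell.charge, amount - drawn)
        cell.charge -= take
        drawn += take
        if drawn == amount:
            break
    return drawn

print(draw(pack, 450))                      # 450, reserve untouched
print(draw(pack, 200, allow_reserve=True))  # taps the plutonium cell
```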

4.3. Recipes

First off, I think that all types of blueprints should be consolidated into the same overarching system, so they can make use of features implemented for each other. Also feel free to read the tl;dr of this section first.
Features such as, for example, extending qualities from tool qualities only to component qualities. E.g. not "bone glue or glue or duct tape", but "mquality: adhesive: 1", as well as the ability to define some components as affecting the end result's properties: Weight, durability, how handy it is to use as a tool. Ideally, those qualities would have more than a single value, depending on the quality itself. For example, the "fabric" quality would feature encumbrance, durability, and protection values.
Some tools might be faster than others, some might impact craft success probability negatively. Ideally, that would be indicated through a relatively simple interface, like (150% speed, 90% success) after the selected tool.
Alas, at this point reworking recipes like this would be... impossible, pretty much. It's something that'd need to be tested from scratch, carefully adjusted, and figured out, to avoid bogging the player down. I am leaning towards having multiple-stage processes like construction, where individual tools/materials affect a specific stage, and the properties of the final object are defined through either a simple domain-specific language, e.g. durability="min(mat1.adhesive/3, 1) * mat2.hardness * 10", or, simpler and perhaps better, mqualities could just have a numerical rating indicating how good they are for that purpose (e.g. as a bar, as armor, or as meat), and they contribute either to craft success, craft speed, or whichever property the current craft stage governs.
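For the DSL variant, a minimal sketch of evaluating the durability expression quoted above against component qualities (the materials and values are invented):

```python
from dataclasses import dataclass

@dataclass
class Material:
    adhesive: float = 0.0
    hardness: float = 0.0

RECIPES = {
    # end-result property -> formula over the chosen components
    "crude_spear_durability":
        lambda mat1, mat2: min(mat1.adhesive / 3, 1) * mat2.hardness * 10,
}

duct_tape = Material(adhesive=2.4)
scrap_steel = Material(hardness=0.8)

durability = RECIPES["crude_spear_durability"](duct_tape, scrap_steel)
print(durability)  # min(0.8, 1) * 0.8 * 10 = 6.4
```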
tl;dr: Perhaps this would need to be sawed down and simplified, but the premise here is: I would like to give the player an actual reason to stay on the lookout for better tools, materials, and components, with only part of it being a "checklist" of things to find, and plenty to figure out and improvise on your own. Rather than making a survivor winter coat, why not figure out which animal's fur is the warmest, and line your greatcoat with it? Find and pursue the solution yourself, especially when it means adapting to this strange new world.

5. The environment.

5.1. Dynamic environment

What I would do here is create the notion of "groups" of zombies, animals, or survivors, which have some very basic AI simulated on the world map, and are only realized into actual herds/lairs/buildings once you're close enough. You should be able to realize that giant bees have been raiding you recently, and that that means there has to be a new nest nearby, that wolves have wandered close and probably have a lair, or find migrating ants on the way to establish a new colony. E.g. a combination of "dynamic environment" and "dynamic locations" to raid/clear/utilize.
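A rough sketch of that abstraction, assuming groups drift on the overmap as cheap abstract entities and only get realized into actual creatures once the player is within some radius (all parameters invented):

```python
import math, random

class Group:
    def __init__(self, kind: str, x: int, y: int, size: int):
        self.kind, self.x, self.y, self.size = kind, x, y, size
        self.realized = False

    def tick(self):
        """Cheap abstract AI: drift one overmap tile at random."""
        if not self.realized:
            self.x += random.choice((-1, 0, 1))
            self.y += random.choice((-1, 0, 1))

    def maybe_realize(self, px: int, py: int, radius: float = 3.0):
        """Spawn the actual herd/lair once the player is close enough."""
        if self.realized:
            return
        if math.hypot(self.x - px, self.y - py) <= radius:
            self.realized = True
            print(f"Realized {self.size} {self.kind} at ({self.x}, {self.y})")

random.seed(1)
groups = [Group("giant bees", 10, 10, 12), Group("wolves", 5, 4, 5)]
for _turn in range(20):      # overmap simulation loop
    for g in groups:
        g.tick()
        g.maybe_realize(3, 3)  # player sits at overmap tile (3, 3)
```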

5.2. Procgen improvements

First off, a small one: IMO, loot generation should be switched to first choosing an item or bundle of items, and then allocating it into containers, so that if a gun store generates a 9mm firearm, it also generates a magazine for it, and a stack or two of 9mm ammo (see the sketch after this subsection). It could also be used to create "types" of, say, restaurants, independent of the actual building.
Second off, rather than choosing a whole random building, IMO, there should be more instances of parts of a building being chosen randomly from a few variants with different layouts.
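A sketch of the bundle-first generation from the first point, with an invented loot table: pick a coherent bundle for the location, then distribute it across the containers that actually exist there:

```python
import random

BUNDLES = {
    "gun store": [
        ["9mm pistol", "9mm magazine", "9mm ammo x50", "9mm ammo x50"],
        ["pump shotgun", "00 buckshot x20"],
    ],
}

def spawn_loot(location: str, containers: list) -> dict:
    """Pick a coherent bundle first, then scatter it into containers."""
    bundle = random.choice(BUNDLES[location])
    placed = {c: [] for c in containers}
    for item in bundle:
        placed[random.choice(containers)].append(item)
    return placed

random.seed(7)
print(spawn_loot("gun store", ["display case", "back room shelf"]))
```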

5.3. Challenge and combat.

Needs to be toned way down in terms of vertical progression, albeit... one way in which lower-level enemies could stay relevant would be to adopt an HP system like Exanima's, where you can take either "hard" damage (cutting/piercing/hard bashing) or "soft" damage that regenerates fast-ish on its own (absorbed by armor, glancing blows), so that even if your armor absorbs the majority of damage, you still take some.
I think that doing this would make it possible to reduce zombie counts (which are annoying as hell) without sacrificing how dangerous they are.
In fact, I'd even go as far as to say: have soft/hard/critical damage, with the last being extremely difficult to heal, so that extremely high-end enemies like turrets, rather than killing you, instead cripple you for a while with really tough-to-heal critical-type damage.
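A sketch of that layered damage idea, with placeholder regeneration rates: soft damage recovers quickly on its own, hard damage slowly, and critical damage not at all without dedicated care:

```python
class Health:
    def __init__(self, max_hp: int = 100):
        self.max_hp = max_hp
        self.soft = self.hard = self.critical = 0

    def hit(self, raw: int, absorbed: int, critical: bool = False):
        """What armor absorbs still lands as fast-regenerating soft
        damage; what penetrates is hard, or critical for high-end
        attackers like turrets."""
        self.soft += absorbed
        if critical:
            self.critical += raw - absorbed
        else:
            self.hard += raw - absorbed

    def regen_turn(self):
        self.soft = max(0, self.soft - 5)   # fast-ish on its own
        self.hard = max(0, self.hard - 1)   # slow
        # critical damage heals only through dedicated care, not here

    def current_hp(self) -> int:
        return self.max_hp - self.soft - self.hard - self.critical

h = Health()
h.hit(raw=20, absorbed=15)                  # armored zombie bite
h.hit(raw=30, absorbed=5, critical=True)    # turret burst
h.regen_turn()
print(h.current_hp())  # 100 - 15 - 4 - 25 = 56
```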
I'm not gonna talk about nerfing vehicles, because I think that the need for that is very self-evident. Unless it's intended that you can roll through anything, anywhere, be it a chicken or a tank drone.

6. tl;dr/Summary

Basically, the outline of my thoughts comes down to shifting the progression from a central measure of how strong your character is, to something both more open-ended, and touching upon more game mechanics than currently, as well as factoring the "inevitable" inheritance of a run into the core gameplay loop, in a way that makes sense in a roguelike context, and adding more depth - even if most of it would be utilized very little - to the crafting of items, bases, vehicles, and other objects. I would like to give the world around the survivor more relevance, and reasons to interact with it.
Currently, the game has incredible amounts of content, but the vast majority of it gives the player no reason to care about it, and what you care about reduces to a very one-dimensional measure of how far along you are - there's just skills, gear, and vehicles, and most of that is defined by which books you have access to. Instead of a "how does this content factor into my options?", you only ask yourself a binary "does it?"... and the answer is usually a no, especially as you get further in the game.
And that is not only boring, but leads to the issue of power creep: Because there is only a single axis to progress on, to be relevant, content has to make you "stronger", and since everything falls on that axis, the stronger you are, the less of the game is relevant to you. At some point, once you know what to do, it's just a grind.
And I think that the game could do far better than that, if it focused on how many distinct things surviving entails, especially multiple humans coming together, and the continuous process of adapting to the environment and utilizing the new, extradimensional objects and creatures. The world has essentially ended, with all its military might; you're supposed to be surviving in that world, not becoming its new God. And as long as the only goal is to "survive and increase your combat capability", every new addition and change to the game will do nothing towards guiding it towards becoming a better game.
Or, basically, the game needs to stop being about a single central measure of progression. Preparing yourself for the winter/cold environments should be separate from preparing yourself for facing robots, which should be separate from surviving zombies, which in turn should be governed by a different metric of progression than maintaining a food supply, preparing for the worst (death of your character), and tweaking your gear, and more of the game should be a process of continuous improvement, rather than ticking items off a checklist. Modular content would go a very far way in this respect, imo.
That's what I mean when I say the game has deep flaws that I think are unlikely to be corrected. And I know that my post is incoherent and at times extremely ambitious... I just... find it difficult to collect myself better than that. Please do not be too mean.
And if you have any questions, please ask me, I am confident in my ability to come up with, if not answers, then at least food for thought. I am well capable of coming up with less ambitious proposals than the stuff here, I just... idk, I had to dump the contents of my brain first.
I will give more thought to actual, more modest change proposals as I continue my current run, and open a few issues, or make another megapost collecting mainly the small things.
submitted by derpderp3200 to cataclysmdda [link] [comments]

Related videos:
GOD OF INDICATORS - 99,99% work - binary option strategy ...
The easiest binary options indicator for beginners 2020 ...
The Best Binary Option Non Repainting Arrow Pop Up Indicator
Bollinger Bands Updated ARROW mt4 indicator for BINARY OPTION & FOREX 100% FREE LIFETIME!
Binary Options 60 Seconds Indicator 99% Winning Live ...
Arrow Indicator IQ Option Real Account (Free Download)
IQ Option best 95% Perfect signal Indicator Attach With ...
ADX crosses arrows Non repaint mt4 indicator free ...

Binary Arrow Indicator – the simplest indicator for Binary Options trading. RED arrow = PUT Option; GREEN arrow = CALL Option. There aren't too many signals, and therefore traders need to watch multiple 5-minute currency charts of different pairs in order to reap the maximum benefit from this strategy.

No-repaint binary options indicator, claimed to be a 95% accurate system. On the chart it additionally shows the daily open line and the pivot level, which you can also use to locate support and resistance levels.

Download a huge collection of binary options strategies, trading systems and binary options indicators, 100% free. In this category only the most accurate binary options indicators are published; most of them are not repainted and not delayed, and will be a good trading tool for a trader of any level.

As we are using this indicator for binary options, we need to use a 1-minute chart, and each trade should have a 2-3 minute expiry. All the major currency pairs work best for this indicator. You can use any binary options broker to trade with the help of this indicator. Binary.com, the pioneer of binary trading, recently introduced deriv.com ...

How to install the Binary Options Arrow indicator in the MetaTrader 4 trading platform: extract the downloaded Binary Options Arrow Indicator.rar, go to the "File" menu in the MT4 platform and click "Open data folder", then open the MQL4 folder and place the files in the Indicators folder.

Settings include the interval at which the arrow is displayed near a signal - the arrow's offset interval under/above the signal candle. Using binary options indicators on timeframes from M15 and higher is recommended; higher timeframes mean better signals. If the indicator shows a probability lower than 65%, using it for the given currency pair and timeframe is not recommended until the situation on the chart changes.

Free UOP Binary Options Indicator. Are you looking for the famous UOP binary options indicator? The UOP system consists of 8 trading indicators, some basic and some advanced. Just download the file and add them all to the MetaTrader 4 platform.
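None of the pages above publish the actual rules behind their arrows, but arrow-style signal logic generally reduces to printing a marker when some indicator condition flips. Here's a toy sketch assuming a simple moving-average cross rule - my own stand-in, not the logic of any indicator named above:

```python
def sma(values, n):
    """Simple moving average; one value per bar from bar n-1 onward."""
    return [sum(values[i - n + 1:i + 1]) / n for i in range(n - 1, len(values))]

closes = [1.10, 1.11, 1.13, 1.12, 1.10, 1.08, 1.09, 1.12, 1.14, 1.13]
slow = sma(closes, 4)
fast = sma(closes, 2)[-len(slow):]   # align both series on the latest bars

for i in range(1, len(slow)):
    if fast[i - 1] <= slow[i - 1] and fast[i] > slow[i]:
        print(f"bar {i}: GREEN arrow (CALL)")   # fast crossed above slow
    elif fast[i - 1] >= slow[i - 1] and fast[i] < slow[i]:
        print(f"bar {i}: RED arrow (PUT)")      # fast crossed below slow
```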


