Does the Windows 8 installation stall at the disk formatting screen? Do you get this error: “We couldn’t create a new partition or locate an existing one. For more information, see the Setup log files”?

There are a couple of steps to try.

First, disconnect all external hard drives and card readers, and remove any SD cards currently in the system. Reboot and try the installation again.

If that does not work, try this process, taken from TechNet:

  1. Once the setup fails to find the partition, close the setup window (the red X at the top right does the job).
  2. You should be brought back to the initial setup screen. Choose “Repair”, then go to the advanced tools and start the command line.
  3. Start DISKPART.
  4. Type LIST DISK and identify your SSD disk number (disks are numbered from 0 to n).
  5. Type SELECT DISK <n>, where <n> is your SSD disk number.
  6. Type CLEAN.
  7. Type CREATE PARTITION PRIMARY.
  8. Type ACTIVE.
  9. Type FORMAT FS=NTFS QUICK.
  10. Type ASSIGN.
  11. Type EXIT twice (once to get out of DiskPart, once to exit the command line tool).
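If you ever need to repeat this, the same commands can be run non-interactively from a script file via diskpart /s. A minimal sketch, assuming your SSD turned out to be disk 0 (verify with LIST DISK first; the file name is arbitrary):

REM clean-ssd.txt, run from the setup command line with: diskpart /s clean-ssd.txt
REM WARNING: CLEAN wipes the selected disk; disk 0 is an assumption
select disk 0
clean
create partition primary
active
format fs=ntfs quick
assign
exit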

While updating my Raspberry Pi for a Cloudflare auto-IP-updater script, I ran into an issue where the dig command couldn’t be found, and the dnsutils package containing dig was not in the Raspbian repository. You can easily replace the dnsutils package with the knot-dnsutils package, and you get dig back. Install with sudo apt-get install knot-dnsutils.
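As a quick sanity check that dig is back, you can look up your public IP the way many of these updater scripts do. A sketch using the well-known OpenDNS resolver trick (the resolver choice here is just an example):

dig +short myip.opendns.com @resolver1.opendns.com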

 

Look for more here: 1, 2, 3

About Boot Mode

The Raspberry Pi 3 can now be booted from a compatible USB drive. It works pretty well, but there is no noticeable difference in speed between a good SD card and a good USB drive. Cost may be the main factor, as good USB drives are generally cheaper than good SD cards. Not all USB drives are supported: my SanDisk Ultra Fit worked, but my Transcend JetFlash did not.

Read the official documentation to learn how to set it up.
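For reference, enabling it boils down to setting a one-time OTP bit while booted from an SD card. A sketch of the documented procedure (double-check the official docs first, since the OTP change is permanent):

# enable USB boot mode on the Pi 3 (one-time, irreversible OTP change)
echo program_usb_boot_mode=1 | sudo tee -a /boot/config.txt
sudo reboot
# after rebooting, verify the OTP bit is set (expect 17:3020000a)
vcgencmd otp_dump | grep 17:
# the program_usb_boot_mode line can then be removed from /boot/config.txt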

 

Boot Mode and Updates

I used apt-get update/upgrade to update my Raspberry Pi, which wiped out the boot mode. Make sure not to upgrade the bootloader, or keep an edited bootable SD card around so you can reapply the boot mode edits. If the boot mode does break, you can boot from the SD card and reapply the next-branch firmware to the USB install via chroot:

 

# Mount the USB drive's root and boot partitions
sudo mkdir /mnt/target
sudo mount /dev/sda2 /mnt/target/
sudo mkdir /mnt/target/boot
sudo mount /dev/sda1 /mnt/target/boot/

# Bind the virtual filesystems and chroot into the USB install
cd /mnt/target
sudo mount --bind /dev dev
sudo mount --bind /sys sys
sudo mount --bind /proc proc
sudo chroot /mnt/target

# Reapply the next-branch firmware, which contains the USB boot support
sudo BRANCH=next rpi-update

# Leave the chroot and clean up the bind mounts
exit
sudo umount dev
sudo umount sys
sudo umount proc

sudo reboot

The official forums also have some more information.

While trying to troubleshoot some Jetpack issues today, I ran into a new WordFence issue. A couple, actually. Here are the problems and the solutions:

WordFence does not cache WordPress anymore.

They removed the Falcon cache. Use CloudFlare instead!

Source

The WordFence Firewall page pops up an error.

You might see this error:

We were unable to write to ~/wp-content/wflogs/ which the WAF uses for storage. Please update permissions on the parent directory so the web server can write to it.

This is because WordFence added a new wflogs folder, which could have the wrong permissions set. Navigate to the folder and set the correct permissions.

cd /var/www/(PATH TO WORDPRESS)/wp-content/
# give the web server write access; on Debian/Ubuntu the web server
# user and group are typically both www-data
chmod 755 wflogs
chown (WWW USER):(WWW GROUP) wflogs

Also, users who run wp-cron as a scheduled Linux cron job (instead of letting WordPress trigger it on each page load) may run into an error where the cron job resets the folder’s permissions to root. Remove the job from root’s crontab and add one under /etc/cron.d that runs as the web server user (cron.d entries, unlike crontab entries, include a user field):

cd /etc/cron.d
sudo nano wpcron

### Call wp-cron every 10 minutes
*/10 * * * * www-data /usr/bin/wget -q "http://www.tgmgroup.org/wp-cron.php?doing_wp_cron"

Save and exit with CTRL-X, then y.

Source

The WordFence Firewall wants me to use Extended Protection.

Extended Protection loads the firewall before any WP functions or plugins, making it more secure. On Apache, this may only require an .htaccess rule that WF probably generates itself. On Nginx, though, you need to adjust your configuration to hide the .user.ini file:

location ~ ^/wordpress/\.user\.ini {
    deny all;
}
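If WordPress lives at the site root, or you would rather block every .user.ini regardless of path, a broader rule works too (a sketch; adjust it to your own layout):

location ~ /\.user\.ini$ {
    deny all;
}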

Source

 


CrashPlan is a great data backup service. It has some drawbacks, such as high memory use and slow speeds (it is Java-based), but it has some terrific benefits as well. It is about the only program that lets you store backups on multiple computers, including friends’ computers, for free.

However, I recently ran into a problem with CrashPlan: something was causing it to see a folder as 0 bytes instead of several gigabytes. It could not see the data in the folder, and as a result, no data was being backed up.

So what went wrong? I initially thought it was something to do with security settings and file permissions. I searched Google for a while and came up with some hints, such as taking ownership of the files. For that task, Ultimate Windows Tweaker let me add a Take Ownership option to the right-click context menu (a registry tweak) and take ownership of all the files.

That didn’t fix anything. I was still pretty sure it was a file permission error, so I looked at the base drive and made sure all accounts (Myself and Everyone) had the necessary permissions, with the correct option selected (applying these permissions to “this folder, subfolders, and files”).

It still didn’t work, so I played around in CrashPlan to see what was going on. Only some folders could not be backed up, and as it turns out, they were exactly the ones being shared with the Windows 7 HomeGroup. That was the big hint, and I eventually followed these steps:

  1. Disable “Password protected sharing” (Control Panel>All Control Panel Items>Network and Sharing Center>Advanced Sharing Settings\Password protected sharing)
  2. Add “Everyone” account to necessary folders and give them “full control” permissions (Right-click folder, Security\Advanced\Change Permissions\Add\Allow Full Control)

I am not sure that step 1 is necessary, but I believe that with password-protected sharing enabled, Windows cut access down from Myself and Everyone to just Myself and HomeUsers (HomeGroup). Either way, the most important thing is this: CrashPlan needs to have an Everyone account associated with the folders to be backed up (along with the necessary permissions to read them). Otherwise, backups may not work.
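For what it’s worth, step 2 can also be done from an elevated command prompt with icacls; a sketch, where the path is a placeholder and (OI)(CI) makes the grant inherit to subfolders and files:

icacls "D:\SharedFolder" /grant Everyone:(OI)(CI)F /T

(Per the comments below, read access, RX instead of F, may be enough.)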

But funnily enough, another shared folder that is backed up in CrashPlan has Myself, Administrators, HomeUsers, and SYSTEM user accounts attached to it. Maybe SYSTEM is enough?

 


Comments

Previous comments imported from a different version of this blog.

Hint: use System instead of Everyone for security reasons.

I had this same problem and solved it. I moved an external hard drive from a Win XP to a new PC with Win 7 and found CrashPlan saw nothing under a directory with 300 gigs. Yet other folders on the same drive did not have the problem. That directory with 300 gigs had been shared previously. I found I had to add SYSTEM to the permissions so that CrashPlan could see the entire directory structure underneath it. After more digging, adding Everyone will have the same effect. — Charlie

I didn’t have to give Everyone FULL permission, just read permission. But the trick was to disable the Password Protected Sharing. THANK YOU!…saved me a lot of grief! — Steve

I found success by adding SYSTEM account. Not full control and with password protected sharing turned ON. — Sean

It was option 1 for me – the password sharing. Disabled it and crashplan immediately worked again. No idea why this happened or what caused it though. — Nicholas

I had the same problem on Windows Server 2012. CrashPlanPRO wasn’t backing up shares I had changed the security settings on. Adding the SYSTEM user and giving it read and execute rights did the trick. Thanks — Josh

 

 

While working on my Raspberry Pi, I got this error after SSHing in: Could not chdir to home directory : No such file or directory

One benefit of a home directory is that command history is saved there, so the next time you log in, pressing the up arrow brings back previous commands. Therefore, it would behoove me to fix this error.

This error may indicate that the directory does not exist, or that it’s not linked in the user file at /etc/passwd.

I first tried this command: sudo usermod -d /home/USERNAME USERNAME

But it doesn’t work when you are logged in through SSH as the user you want to change! I killed my session with kill, which logged me out and accomplished nothing except restarting my getty/ssh process. Apparently this could never work, because my own active session was blocking the change.

So I opened up the user file with: sudo nano /etc/passwd

I added the home directory to my user, which looked something like this:

pie:x:1000:1001:Pie,,,:/home/pie:/bin/bash

The important parts are at the end: the home directory (/home/pie) and the login shell (/bin/bash). Note the number of colons used!
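For reference, each /etc/passwd entry has exactly seven colon-separated fields:

username:password:UID:GID:GECOS comment:home directory:login shell
pie:x:1000:1001:Pie,,,:/home/pie:/bin/bash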

I edited the file and started another SSH session, but I couldn’t log in as usual, and I got this error: Could not chdir to home directory /bin/bash: Not a directory

What was the problem? I hadn’t used the correct number of colons: I had “::” after my directory instead of just “:”. Remember to test your changes in a new session before logging out of the current one!

Problem solved, after a while.

 

 

References – 1, 2, 3


A mounted Box.com drive and duply are a good combination for backing up data to a remote location. Duply allows for encrypted backups, and its checksummed volumes help guard against undetected bit rot. A remote drive is useful for keeping cloud-based servers free of extraneous data, which is especially helpful when paying per GB per month for storage.

My problem started after I upgraded Ubuntu. Apparently, the upgrade left me with the default davfs2 configuration instead of my custom one, which caused a few errors. But I also learned a few things:

To use Box.com, you must have an appropriately sized cache and disable file locking in the davfs2 config.

To set this up, edit the config: sudo nano /etc/davfs2/davfs2.conf

Add these two lines anywhere (preferably under the commented-out section or at the bottom):

use_locks 0
cache_size 100
  • File locking must be disabled for Box, as it does not support locks. In theory, a file could be written to on the server while it is being uploaded to Box (and vice versa), but your timed sync/backup strategy should prevent this.
  • Box.com is limited to files of around 50 MB, and duply defaults to 25 MB volumes, so a 100 MB cache is safely over both limits.
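For completeness, the mount itself usually lives in /etc/fstab; a sketch, assuming Box’s WebDAV endpoint is still dav.box.com/dav and using /mnt/box as a placeholder mount point:

https://dav.box.com/dav /mnt/box davfs rw,user,noauto 0 0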
If a mounted drive cannot be written to, the davfs2 cache can grow very large.

In my case, it had grown to around 9 GB. Check what’s going on with journalctl: sudo journalctl -b | grep mount or something like journalctl -u davfs2 --since today. You might see a message like open files exceed max cache size. Then check the davfs2 configuration to increase the cache size and make sure locking is disabled, as detailed above.

The davfs2 cache (under /var/cache/davfs2 by default) can be deleted safely once the drive is unmounted.

Unmount using umount /LOCATION/NAME

If a drive is busy, it can’t be unmounted. You can force an unmount (with a chance of data corruption) using umount -f or umount -l, but it’s better to identify and kill the processes using the drive:

  1. lsof | grep '/dev/sda1' (change /dev/sda1 to the mounted drive’s name)
  2. pkill target_process (kills the busy process by name; alternatively, kill PID or killall target_process)
  3. umount /dev/sda1
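Alternatively, fuser can collapse steps 1 and 2 into a single, blunter command (make sure nothing important is still using the mount; the mount point is a placeholder):

sudo fuser -km /mnt/NAME
sudo umount /mnt/NAME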

How do you set up a duply and Box.com backup? I think I wrote about it before, but if not, here are some links: 1, 2
Other resources: 1, 2, 3, 4, 5

 

 


The Raspberry Pi 2 (and 3) apparently has a problem with mounting USB drives at boot. This is probably due to the slow spin-up speed of HDD-based externals and the relatively fast boot of the Pi.

How to fix a Pi mounting error:
  1. Edit the Pi boot config: sudo nano /boot/cmdline.txt
    1. Append this parameter to the end of the existing line (see the example after this list):
      rootdelay=5
    2. Change the root delay from 5 seconds to whatever works for you (somewhere between 0 and 60 is probably best)
    3. Reboot: sudo reboot
  2. Edit the system boot config: sudo nano /etc/rc.local
    1. Add these lines above the final exit 0:
      sleep 30
      sudo mount -a
    2. Reboot: sudo reboot
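For reference, cmdline.txt must remain a single line of space-separated parameters; with rootdelay appended, it might look something like this (the other values are typical Raspbian defaults, not necessarily yours):

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait rootdelay=5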

The second method worked for me.

 


 

When I connected my drives, I had different functions and roles in mind for them. The way they are set up now, I have a mirror of a data drive and a SnapRAID backup. So I needed to change the drive names in order to tell the drives apart.

How to change drive names in Linux:
  1. Identify the drives’ mount points: sudo lsblk or sudo mount or cat /etc/fstab
  2. Identify the drive names: sudo lsblk or sudo xfs_admin -l /dev/sdb1
  3. Unmount the drive: sudo umount /dev/sda1
  4. Rename the drive: sudo ntfslabel /dev/sda1 NAME for NTFS or sudo xfs_admin -L NAME /dev/sda1 for XFS
  5. Remount the drive: sudo mount /dev/sda1

You must use the labeling tool that matches each drive’s file system; you cannot use an NTFS renaming tool on an ext4 drive, for example. See more at Source 2.
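For example, ext2/3/4 drives use e2label and FAT drives use fatlabel (device names here are placeholders):

sudo e2label /dev/sda1 NAME
sudo fatlabel /dev/sda1 NAME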

Source 1, 2, 3

I keep getting a weird error where Fail2Ban refuses to start due to a bad log location. It looks like this: fail2ban-client[##]: ERROR No file(s) found for glob /var/log/mail.warn

Looking at /var/log/, there are no mail.warn logs. PostFix only logs to mail.log and mail.err.

The fix is to edit jail.conf (or, better, a local jail file) to point to the mail.log or mail.err file, although which of the two is the correct log to use is not clear to me.

Anyway, here is the code:

  1. Edit: sudo nano /etc/fail2ban/jail.conf
  2. Find and comment out: #logpath = %(postfix_log)s
  3. Insert: logpath = /var/log/mail.err
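Put together, the postfix section of the jail file might end up looking something like this (a sketch; the enabled and port values depend on your setup):

[postfix]
enabled  = true
port     = smtp,465,submission
#logpath = %(postfix_log)s
logpath  = /var/log/mail.err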

Then restart Fail2Ban (sudo service fail2ban restart), and this time it will hopefully start. Whether it works effectively will depend on the choice of log file, I guess.

References: 1

I finally got around to reading my PostFix logs, and I found a few big errors. One of them was fatal: address(##): message file too big

The problem is probably in these two settings in the PostFix main config: message_size_limit and mailbox_size_limit

Check the configuration using:

postconf | grep size
postconf -d | grep size

where -d shows the compiled-in defaults rather than the current settings.

Fix the error by removing the limits (a value of 0 means no limit):

Method 1: set them directly with postconf:

sudo postconf -e mailbox_size_limit=0
sudo postconf -e message_size_limit=0

Method 2: edit the config by hand: sudo nano /etc/postfix/main.cf

Either add these lines or edit the existing ones:

message_size_limit = 0
mailbox_size_limit = 0

Then restart:

service postfix restart
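You can confirm the new values took effect by querying them directly; postconf with a parameter name prints its current value:

postconf message_size_limit
postconf mailbox_size_limit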

Sources: 1, 2